Meta: Llama 3.1 70B Instruct

Public pricing · Intelligence 74/100 · Medium memory · Tool use

Meta: Llama 3.1 70B Instruct is a text-only model built for general chat, analysis, and production workloads. It pairs solid general-purpose performance with a 131K-token context window and a balanced cost profile for dependable, high-volume use.

Pricing

  • Input: $0.40/1M tokens
  • Output: $0.40/1M tokens
  • Cached input: $0.10/1M tokens
  • Batch: $0.20/1M tokens
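As a sketch of how the rates above combine into a monthly bill, here is a small estimator. The per-million-token prices come from the list above; the workload numbers you pass in are illustrative, and whether the batch rate applies to input, output, or both may vary by provider.

```python
# Hypothetical cost estimator for Llama 3.1 70B Instruct, using the
# per-million-token rates listed above. Workload sizes are inputs you
# choose, not properties of the model.

INPUT_PER_M = 0.40    # $ per 1M input tokens
OUTPUT_PER_M = 0.40   # $ per 1M output tokens
BATCH_PER_M = 0.20    # $ per 1M tokens on the batch tier (assumed to
                      # apply to both input and output here)

def monthly_cost(conversations, prompt_tokens, reply_tokens, batch=False):
    """Estimate the monthly bill in dollars for a given workload."""
    in_rate = BATCH_PER_M if batch else INPUT_PER_M
    out_rate = BATCH_PER_M if batch else OUTPUT_PER_M
    input_cost = conversations * prompt_tokens / 1e6 * in_rate
    output_cost = conversations * reply_tokens / 1e6 * out_rate
    return round(input_cost + output_cost, 2)

# The FAQ's example workload: 50,000 conversations, 1,500-token
# prompts, 800-token replies.
print(monthly_cost(50_000, 1500, 800))  # → 46.0
```

Switching the same workload to the batch tier halves the bill to $23.00, which is the main lever for high-volume, latency-tolerant jobs.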

Calculate your Llama 3.1 70B Instruct bill.

Adjust the workload (from 1,000 to 10M conversations a month) to see your monthly cost at your exact volume.

Technical specifications

Llama 3.1 70B Instruct at a glance.

  • Memory: 131,072 tokens
  • Max reply: 16,384 tokens
  • Memory tier: Medium (a long report or a codebase file)
  • Tokenizer: llama3
  • Released: Jul 2024
  • Training cutoff: Dec 2023
  • Availability: Public pricing
  • Status: Active
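One practical consequence of the specs above: the 131,072-token window is shared between your input and the model's reply, so the space left for prompt and history depends on how much reply room you reserve. A minimal sketch of that budget check, assuming you reserve the full 16,384-token max reply:

```python
# Sketch: will a prompt fit in Llama 3.1 70B Instruct's context?
# CONTEXT_WINDOW and MAX_REPLY come from the spec table above; token
# counts for real text would come from the llama3 tokenizer.

CONTEXT_WINDOW = 131_072   # total tokens per call
MAX_REPLY = 16_384         # tokens reserved for the model's reply

def input_budget(reserved_reply=MAX_REPLY):
    """Tokens left for prompt + history after reserving reply space."""
    return CONTEXT_WINDOW - reserved_reply

def fits(prompt_tokens, history_tokens=0, reserved_reply=MAX_REPLY):
    """True if prompt and history fit alongside the reserved reply."""
    return prompt_tokens + history_tokens <= input_budget(reserved_reply)

print(input_budget())  # → 114688
```

In other words, with the full max reply reserved you have roughly 114K tokens for input, not the headline 131K.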

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • Chatbot Arena Elo: 1248
  • GPQA Diamond: 48
  • HumanEval: 80.5
  • IFEval: 87.5
  • MATH: 68
  • MMLU: 86
  • MMLU-Pro: 66.4

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
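The tool-calling and streaming capabilities above are usually exercised through an OpenAI-compatible chat completions endpoint, which many Llama hosting providers expose. A hedged sketch of such a request body — the model id, tool name, and schema here are hypothetical examples, not part of the model or any specific provider's API:

```python
# Sketch of a tool-calling request body for Llama 3.1 70B Instruct,
# assuming an OpenAI-compatible chat completions endpoint. The
# "get_weather" tool is a hypothetical example.

def build_request(user_message):
    return {
        "model": "llama-3.1-70b-instruct",   # provider-specific model id
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",       # hypothetical tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "stream": True,                      # stream the reply
    }

req = build_request("What's the weather in Oslo?")
```

The model decides per request whether to answer directly or emit a tool call matching the declared schema; your application then runs the tool and returns its result in a follow-up message.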

When to pick Llama 3.1 70B Instruct

  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Llama 3.1 70B Instruct — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Llama 3.1 70B Instruct costs roughly $46 per month. Input is $0.40 /1M tokens and output is $0.40 /1M tokens.
Llama 3.1 70B Instruct has a 131,072-token context window (medium memory — a long report or a codebase file). That means you can fit roughly 98,000 words of input and history in a single call, at a typical ~0.75 words per token.
Beyond text generation, Llama 3.1 70B Instruct supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Llama 3.1 70B Instruct was released in July 2024, with training data cut off around December 2023.
Models in a similar class include Llama Guard 3 8B, Llama 3 70B Instruct, and Llama 3.2 11B Vision Instruct. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Llama 3.1 70B Instruct against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.