Meta: Llama 3.2 3B Instruct

Public pricing · Intelligence 65/100 · Medium memory · Tools

Meta: Llama 3.2 3B Instruct is a text model built for reasoning and problem solving. It combines strong reasoning and planning, an 80K-token context window, and a low-cost profile for reliable work on reasoning, analysis, and hard problem solving.

Input

$0.05/1M

Output

$0.34/1M

Cached

$0.01/1M

Batch

$0.01/1M

Calculate your Llama 3.2 3B Instruct bill.

Set your workload — see cost at your exact volume.
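The calculator boils down to simple arithmetic on the per-token rates listed above (USD per 1M tokens). A minimal sketch in Python; the function name and defaults are illustrative, not an official API:

```python
# Rough monthly-bill estimator from the public per-token rates above.
# Rates are USD per 1M tokens; "conversations" is calls per month.
RATES = {"input": 0.05, "output": 0.34, "cached": 0.01, "batch": 0.01}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=RATES["input"], output_rate=RATES["output"]):
    """Return the estimated monthly bill in USD."""
    input_total = conversations * prompt_tokens / 1_000_000 * input_rate
    output_total = conversations * reply_tokens / 1_000_000 * output_rate
    return input_total + output_total

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies -> roughly $17/month.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # 17.35
```

This matches the roughly-$17 figure quoted in the FAQ below: 75M input tokens at $0.05/1M plus 40M output tokens at $0.34/1M.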

Technical specifications

Llama 3.2 3B Instruct at a glance.

Memory

80,000

tokens

Max reply

4,096

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

llama3

Released

Sep 2024

Training cutoff

Dec 2023

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • bbh

    24.06
  • gpqa_diamond

    32.8
  • ifeval

    77.4
  • math

    48
  • mmlu

    63.4
  • mmlu_pro

    24.39

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
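To illustrate the strict-JSON and streaming capabilities above, here is a sketch of a request body, assuming your provider serves Llama 3.2 3B Instruct behind an OpenAI-compatible chat endpoint; the model id is hypothetical, so check your provider's catalog:

```python
import json

# Request body for strict JSON output with streaming, in the
# OpenAI-compatible chat-completions shape many hosts expose.
payload = {
    "model": "meta-llama/llama-3.2-3b-instruct",  # hypothetical id
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "Extract the city from: 'Ship to Berlin.'"},
    ],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # streamed replies
}

print(json.dumps(payload, indent=2))
```

The same payload shape extends to tool calling via a `tools` array if your provider supports it.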

When to pick Llama 3.2 3B Instruct

  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.
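For the high-volume case, the batch rate in the pricing table changes the math substantially. A sketch comparing standard vs. batch cost, assuming the single $0.01/1M batch figure applies to both input and output tokens (the table lists one number; check your provider's batch terms):

```python
# Standard vs. batch cost for a bulk job, using the rates from the
# pricing table above (USD per 1M tokens). Assumes the $0.01 batch
# rate covers both input and output tokens.
STANDARD_IN, STANDARD_OUT, BATCH = 0.05, 0.34, 0.01

def job_cost(input_tokens, output_tokens, batch=False):
    millions_in = input_tokens / 1_000_000
    millions_out = output_tokens / 1_000_000
    if batch:
        return (millions_in + millions_out) * BATCH
    return millions_in * STANDARD_IN + millions_out * STANDARD_OUT

# 1M requests with 1,000-token prompts and 200-token replies:
standard = job_cost(1_000_000_000, 200_000_000)             # $118.00
batched = job_cost(1_000_000_000, 200_000_000, batch=True)  # $12.00
```

At this volume the batch path is roughly a tenth of the standard price, which is why unit cost dominates the decision.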

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Llama 3.2 3B Instruct — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.

At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Llama 3.2 3B Instruct costs roughly $17 per month. Input is $0.05 /1M tokens and output is $0.34 /1M tokens.
Llama 3.2 3B Instruct has an 80,000-token context window (medium memory — a long report or a codebase file). That means you can fit roughly 60,000 words of input and history in a single call.
Beyond text generation, Llama 3.2 3B Instruct supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Llama 3.2 3B Instruct was released in September 2024, with training data cut off around December 2023.
Models in a similar class include Llama 3 8B Instruct, Llama 3.2 1B Instruct, and Llama 4 Scout. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Llama 3.2 3B Instruct against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.