Meta

LLaMA 3.2 90B

Estimated pricing · Medium memory · Vision

LLaMA 3.2 90B is Meta's medium-memory model with vision support. This page shows current pricing, an interactive cost calculator, and a side-by-side comparison with similar models.

Input: $0.60/1M tokens
Output: $1.80/1M tokens
Cached: —
Batch: —

Interactive

Calculate your LLaMA 3.2 90B bill.

Adjust the workload below and watch the monthly cost update in real time.


Workload presets: 1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M
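The calculator's arithmetic is simple to reproduce offline. A minimal sketch using the listed per-million-token rates (the function name is illustrative, not part of any API):

```python
# Rates from the pricing table above, converted to $ per token.
INPUT_RATE = 0.60 / 1_000_000   # $0.60 per 1M input tokens
OUTPUT_RATE = 1.80 / 1_000_000  # $1.80 per 1M output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly bill for a steady workload."""
    per_call = prompt_tokens * INPUT_RATE + reply_tokens * OUTPUT_RATE
    return conversations * per_call

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # 117.0
```

The same formula drives the sliders: cost scales linearly in all three inputs, so doubling conversations or token counts doubles the bill.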

Technical specifications

LLaMA 3.2 90B at a glance.

Memory: 128,000 tokens
Max reply: 8,192 tokens
Memory tier: Medium (a long report or a codebase file)
Tokenizer: SentencePiece
Released: —
Training cutoff: —
Availability: Estimated
Status: Active

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick LLaMA 3.2 90B

  • Screenshot analysis, image understanding, or document OCR.
  • High-volume workloads where unit cost matters.
  • Multimodal pipelines mixing text + images.

When to look elsewhere

  • You need tool-use / function calling for agent workflows.

FAQ

LLaMA 3.2 90B: the questions we see most.

Pricing, capabilities, and alternatives, generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, LLaMA 3.2 90B costs roughly $117 per month. Input is $0.60/1M tokens and output is $1.80/1M tokens.
LLaMA 3.2 90B has a 128,000-token context window (medium memory: a long report or a codebase file). At roughly 0.75 words per token, that fits about 96,000 words of input and history in a single call.
Beyond text generation, LLaMA 3.2 90B supports understanding images. It streams replies by default.
Models in a similar class include LLaMA 4 Scout, LLaMA 4 Maverick, Gemini 2.0 Pro. The "Similar models" section below this FAQ links into each.
Open-weight model: the price shown comes from a common hosting provider (Together, Fireworks, Replicate). We source estimates from the cheapest public hosting provider for the model and note it on the page.
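The context-window and max-reply limits above imply a simple budget check before each call. A hedged sketch under the advertised limits (the function is illustrative, not a provider API):

```python
# Advertised limits from the specs above.
CONTEXT_TOKENS = 128_000    # total window: prompt + history + reply
MAX_REPLY_TOKENS = 8_192    # longest single reply

def fits(prompt_tokens: int, reserved_reply: int = MAX_REPLY_TOKENS) -> bool:
    """True if the prompt plus a reserved reply budget fit in the window."""
    return prompt_tokens + reserved_reply <= CONTEXT_TOKENS

print(fits(119_808))  # True: 119,808 + 8,192 == 128,000 exactly
print(fits(120_000))  # False: overflows the window by 192 tokens
```

Reserving the full 8,192-token reply up front is conservative; if you cap `max_tokens` lower on a given call, you can spend the difference on prompt and history.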

Still unsure?

Compare LLaMA 3.2 90B against 100+ other models.

Open the full wizard: pick a use case, set your usage, and see side-by-side monthly costs in under a minute.