AI21 Labs Models API Cost Calculator & Comparison

Every AI21 Labs model, side by side — current API rates, context window, benchmarks, and a live calculator that ranks them at your exact workload. 1 active model, 1 with public pricing. Prices refreshed daily.

Models tracked

1

Active

1

With public pricing

1

Cheapest input

$2.00/1M

Calculate your AI21 Labs API cost at your workload.

Set your workload — every priced model ranks in real time.

Adjust the workload

Every model below updates in real time.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Ranked by your monthly bill


Pricing at a glance

Blended $/1M tokens across the lineup.

Blended price uses a 3-to-1 input/output ratio. Green bar = cheapest.
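The blended figure can be reproduced directly from a model's listed rates. A minimal sketch, using the 3-to-1 input/output weighting stated above (the function name is ours, not part of any API):

```python
# Blended $/1M tokens, assuming a 3:1 input/output token mix
# as described in the chart note above.
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Weighted average price: 3 parts input to 1 part output."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Jamba Large 1.7: $2.00 input, $8.00 output
print(blended_price(2.00, 8.00))  # 3.5
```

For Jamba Large 1.7 this works out to $3.50 per 1M blended tokens.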

Quality vs price

AI21 Labs benchmarks at a glance.

Each point is one model — X is blended $/1M tokens, Y is the average of available quality benchmarks. Larger bubbles mean larger context windows.

Per-model benchmark scores

Model | Avg | Scores
Jamba Large 1.7 | 11.0 | AA Intelligence Index: 11

Every model

Every AI21 Labs model — pricing, context & capabilities.

Model | Context | Input /1M | Output /1M
Jamba Large 1.7 | 256K | $2.00 | $8.00
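Estimating a monthly bill from the table is straightforward: multiply each token count by its per-1M rate. A minimal sketch (the workload numbers in the example are illustrative, not from this page):

```python
# Monthly bill estimate from per-1M-token rates.
def monthly_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Dollar cost for one month of usage at per-1M-token rates."""
    return (input_tokens / 1_000_000) * input_per_1m + \
           (output_tokens / 1_000_000) * output_per_1m

# Jamba Large 1.7 ($2.00 in / $8.00 out) at 10M input + 2M output tokens:
print(monthly_cost(10_000_000, 2_000_000, 2.00, 8.00))  # 36.0
```

That illustrative workload comes to $36.00/month; the calculator above does the same arithmetic across every priced model.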

FAQ

AI21 Labs — questions we see most.

Pricing patterns, best-known use cases, and how this provider stacks up.


AI21 Labs API pricing starts at $2.00 per 1M input tokens. Output tokens cost more than input on every model. Prices are per 1 million tokens (1M tokens ≈ 750,000 words). Use the calculator above to estimate your monthly spend at your actual workload.
Jamba Large 1.7 is the lowest-priced AI21 Labs model with public pricing at $2.00/1M input tokens. It suits high-volume tasks where cost matters most — classification, extraction, summarization, and similar workloads that don't need frontier reasoning.
Jamba Large 1.7 is also AI21 Labs' highest-tier model at $2.00/1M input — with a single model currently tracked, it is both the cheapest and the most capable option. It delivers the most sophisticated reasoning, instruction-following, and nuance in the lineup. For workloads that don't require frontier performance, a mid-tier model typically cuts inference costs substantially.
Jamba Large 1.7 supports function calling (tool use), required for agentic workflows. Agents need a model that reliably follows structured output schemas — test with your specific tool definitions before committing to production volumes.
Yes — AI21 Labs supports prompt caching (discounts for repeated context) and batch processing (accept a delay, cut costs ~50%). Where published, these rates appear in the pricing table as "Cached /1M" and "Batch /1M" columns. Caching pays off quickly if your prompts share a long system prompt or document prefix across many calls.
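The batch savings can be sketched numerically. This is an illustration only: the ~50% figure comes from the answer above, and actual AI21 Labs batch and cache discounts may differ — check the published rates before relying on it.

```python
# Rough batch-savings sketch. batch_discount=0.5 reflects the ~50%
# figure mentioned above; it is an assumption, not a published rate.
def batch_cost(input_tokens: int, output_tokens: int,
               input_per_1m: float, output_per_1m: float,
               batch_discount: float = 0.5) -> float:
    """On-demand cost reduced by the batch discount."""
    full = (input_tokens / 1_000_000) * input_per_1m + \
           (output_tokens / 1_000_000) * output_per_1m
    return full * (1 - batch_discount)

# Illustrative workload: 10M input + 2M output tokens on Jamba Large 1.7
print(batch_cost(10_000_000, 2_000_000, 2.00, 8.00))  # 18.0
```

Here a $36 on-demand bill drops to $18 in batch — worthwhile whenever your workload tolerates delayed completion.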
AI21 Labs has historically adjusted prices when launching new model generations, often cutting rates to stay competitive. Buzzi.ai snapshots pricing daily — you can subscribe to price-drop alerts on any AI21 Labs model using the "Alert me" button on its detail page.
Use the main comparison wizard to run the same calculator across AI21 Labs, Anthropic, Google, Meta, Mistral, and 20+ other providers. Set your exact workload and get a ranked cost chart in under a minute.

Look wider

Compare AI21 Labs against other providers.

Open the full wizard — pick a use case, set your usage, and cross-compare against OpenAI, Anthropic, Google, and 20+ more.