Anthropic Models API Cost Calculator & Comparison

Every Anthropic model, side by side — current API rates, context window, benchmarks, and a live calculator that ranks them at your exact workload. 14 active models, all with public pricing. Prices refreshed daily.

Models tracked: 14
Active: 14
With public pricing: 14
Cheapest input: $0.25/1M

Calculate your Anthropic API cost at your workload.

Set your workload — every priced model ranks in real time.


Workload presets: 1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Ranked by your monthly bill


Pricing at a glance

Blended $/1M tokens across the lineup.

Blended price uses a 3-to-1 input/output ratio. Green bar = cheapest.
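The 3-to-1 blend can be sketched in a few lines. The 3:1 weighting is this page's stated assumption for a typical input-heavy workload, not an Anthropic rate:

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Blended $/1M tokens assuming a 3:1 input/output token ratio."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# Claude Sonnet 4.6 at $3.00 in / $15.00 out:
print(blended_price(3.00, 15.00))  # 6.0
```

If your workload skews more heavily toward output (long generations, short prompts), the blend shifts toward the output rate and the ranking can change.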

Quick picks

Best Anthropic model for your use case.

As of April 2026, Anthropic offers 14 active models via API, ranging from $0.25/1M to $30.00/1M input tokens. The most context-rich models handle up to 1M tokens. Models support vision, deep reasoning, and tool use. All prices are USD per 1 million tokens.

Quality vs price

Anthropic benchmarks at a glance.

Each point is one model — X is blended $/1M tokens, Y is the average of available quality benchmarks. Larger bubbles mean larger context windows.

Per-model benchmark scores

Model | Avg | Scores
Claude Opus 4.7 | 68.2 | GPQA Diamond 94.2 · SWE-Bench Verified 87.6 · FrontierMath Tier-4 22.9
Claude 3.7 Sonnet (thinking) | 64.9 | MMLU 86.1 · GPQA Diamond 78 · SWE-Bench Verified 70.3 · MATH 82 · Humanity's Last Exam 8.0
Claude 3.7 Sonnet | 63.3 | MMLU 86.1 · GPQA Diamond 78 · SWE-Bench Verified 62.3 · MATH 82 · Humanity's Last Exam 8.0
Claude 3 Haiku | 61.9 | MMLU 75.2 · GPQA Diamond 33.3 · HumanEval 75.9 · Chatbot Arena Elo 1179
Claude Opus 4.1 | 59.5 | MMLU Pro 89.5 · GPQA Diamond 81 · SWE-Bench Verified 74.5 · AIME 2025 78 · SciPredict 22.2 · Humanity's Last Exam 11.5
Claude Opus 4.6 | 57.4 | GPQA Diamond 91.3 · SWE-Bench Verified 80.8 · FrontierMath Tier-4 22.9 · Humanity's Last Exam 34.4
Claude Sonnet 4.6 | 56.8 | SWE-Bench Verified 79.6 · AA Intelligence Index 52 · FrontierMath Tier-4 8.3 · GPQA Diamond 87.4
Claude 3.5 Haiku | 56.7 | GPQA Diamond 41.6 · HumanEval 88 · SWE-Bench Verified 40.6
Claude Opus 4.6 (Fast) | 56.6 | SWE-Bench Verified 78.7 · FrontierMath Tier-4 22.9 · GPQA Diamond 90.5 · Humanity's Last Exam 34.4
Claude Sonnet 4 | 56.4 | GPQA Diamond 75 · SWE-Bench Verified 72.7 · AIME 2025 70 · Humanity's Last Exam 7.8
Claude Sonnet 4.5 | 54.8 | MMLU Pro 89.1 · GPQA Diamond 83.4 · SWE-Bench Verified 77.2 · AIME 2025 87 · AA Intelligence Index 61 · FrontierMath Tier-4 4.2 · SciPredict 22.6 · Humanity's Last Exam 13.7
Claude Haiku 4.5 | 52.1 | SWE-Bench Verified 73.3 · AA Intelligence Index 31
Claude Opus 4 | 50.2 | MMLU 88 · GPQA Diamond 79.6 · SWE-Bench Verified 72.5 · FrontierMath Tier-4 0.0 · Humanity's Last Exam 10.7
Claude Opus 4.5 | 42.9 | SWE-Bench Verified 76.7 · FrontierMath Tier-4 4.2 · GPQA Diamond 85.5 · SciPredict 23.1 · Humanity's Last Exam 25.2

Every model

Every Anthropic model — pricing, context & capabilities.

Model | Context | Input /1M | Output /1M
Claude Opus 4.7 | 1M | $5.00 | $25.00
Claude Sonnet 4.6 | 1M | $3.00 | $15.00
Claude Opus 4.6 (Fast) | 1M | $30.00 | $150.00
Claude Opus 4.6 | 1M | $5.00 | $25.00
Claude Opus 4.5 | 200K | $5.00 | $25.00
Claude Haiku 4.5 | 200K | $1.00 | $5.00
Claude Sonnet 4.5 | 1M | $3.00 | $15.00
Claude Opus 4.1 | 200K | $15.00 | $75.00
Claude Opus 4 | 200K | $15.00 | $75.00
Claude Sonnet 4 | 1M | $3.00 | $15.00
Claude 3.7 Sonnet | 200K | $3.00 | $15.00
Claude 3.7 Sonnet (thinking) | 200K | $3.00 | $15.00
Claude 3.5 Haiku | 200K | $0.80 | $4.00
Claude 3 Haiku | 200K | $0.25 | $1.25
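As a rough sketch of what the calculator does with these rates, assuming a simple workload model (requests per month times a fixed input/output token profile per request); the model subset and workload numbers are illustrative:

```python
# $/1M token rates from the table above: (input, output)
RATES = {
    "Claude 3 Haiku":    (0.25, 1.25),
    "Claude Haiku 4.5":  (1.00, 5.00),
    "Claude Sonnet 4.6": (3.00, 15.00),
    "Claude Opus 4.7":   (5.00, 25.00),
}

def monthly_cost(requests, in_tokens, out_tokens, in_rate, out_rate):
    """USD per month for a fixed per-request token profile."""
    return requests * (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# 50,000 requests/month, 2,000 input + 500 output tokens each,
# ranked cheapest first:
workload = {m: monthly_cost(50_000, 2_000, 500, *r) for m, r in RATES.items()}
for model, cost in sorted(workload.items(), key=lambda kv: kv[1]):
    print(f"{model}: ${cost:,.2f}")
```

At this profile Claude 3 Haiku comes to $56.25/month versus $1,125.00 for Claude Opus 4.7, which is why the ranking is so sensitive to whether your workload actually needs frontier quality.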

FAQ

Anthropic — questions we see most.

Pricing patterns, best-known use cases, and how this provider stacks up.


How much does the Anthropic API cost?

Anthropic API pricing ranges from $0.25 to $30.00 per 1M input tokens. Output tokens cost more than input on every model. Prices are per 1 million tokens (1M ≈ 750,000 words). Use the calculator above to estimate your monthly spend at your actual workload.
What is the cheapest Anthropic model?

Claude 3 Haiku is the lowest-priced Anthropic model with public pricing at $0.25/1M input tokens. It suits high-volume tasks where cost matters most — classification, extraction, summarization, and similar workloads that don't need frontier reasoning.
What is Anthropic's most expensive model?

Claude Opus 4.6 (Fast) is Anthropic's highest-priced model at $30.00/1M input. It delivers the most sophisticated reasoning, instruction-following, and nuance. For workloads that don't require frontier performance, a mid-tier model typically cuts inference costs substantially.
Which Anthropic model is best for coding?

Claude Opus 4.7, Claude Sonnet 4.6, Claude Opus 4.6 (Fast) and 8 more support deep reasoning mode, which improves performance on multi-step coding, debugging, and code review. For simpler autocomplete or snippet generation, a faster, cheaper model often delivers acceptable quality at a fraction of the cost.
Which Anthropic models support function calling for agents?

Claude Opus 4.7, Claude Sonnet 4.6, Claude Opus 4.6 (Fast) and 11 more support function calling (tool use), required for agentic workflows. Agents need a model that reliably follows structured output schemas — test with your specific tool definitions before committing to production volumes.
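For a sense of what "structured output schemas" means in practice, here is a minimal sketch of a tool definition in the JSON-Schema shape Anthropic's Messages API expects; the `get_weather` tool, its schema, and the model id are illustrative, not taken from this page:

```python
# Illustrative tool definition; "get_weather" is a made-up example tool.
tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
]

# The tools list rides along with a normal Messages API request body:
request_body = {
    "model": "claude-sonnet-4-5",  # any tool-capable model id
    "max_tokens": 1024,
    "tools": tools,
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
print(request_body["tools"][0]["name"])  # get_weather
```

The point of testing with your own definitions is that schema complexity (nested objects, enums, required fields) is exactly where cheaper models start returning malformed tool calls.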
Can Anthropic models process images?

Yes — Claude Opus 4.7, Claude Sonnet 4.6, Claude Opus 4.6 (Fast), Claude Opus 4.6 and 10 more accept image input alongside text. You can pass screenshots, photos, charts, and documents for analysis. Vision adds no separate line-item on most Anthropic models — you're billed for the token equivalent of the image.
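To put a number on "the token equivalent of the image": Anthropic's vision docs give a rule of thumb of roughly (width × height) / 750 tokens per image. A back-of-envelope sketch under that assumption (check the current docs before relying on it):

```python
def image_tokens(width_px: int, height_px: int) -> int:
    # Rule of thumb from Anthropic's vision docs: ~(w * h) / 750 tokens.
    return int(width_px * height_px / 750)

def image_cost_usd(width_px, height_px, input_rate_per_1m):
    """Cost of one image billed at the model's input-token rate."""
    return image_tokens(width_px, height_px) * input_rate_per_1m / 1_000_000

# A 1000x1000 screenshot on a $3.00/1M-input model:
tokens = image_tokens(1000, 1000)  # 1333 tokens
print(f"{tokens} tokens, ${image_cost_usd(1000, 1000, 3.00):.4f}")
```

At these assumed numbers a single megapixel-scale image costs well under a cent on a Sonnet-tier model, so image volume rather than image size usually drives the bill.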
Does Anthropic offer prompt caching or batch discounts?

Yes — Anthropic supports prompt caching (discounts for repeated context) and batch processing (accept a delay, cut costs ~50%). These rates appear in the table above under "Cached /1M" and "Batch /1M." Caching pays off quickly if your prompts share a long system prompt or document prefix across many calls.
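To see why caching pays off on a shared prefix, here is a back-of-envelope sketch. The 1.25× cache-write and 0.1× cache-read multipliers are assumptions based on Anthropic's published caching rates at the time of writing and should be checked against the current pricing page:

```python
def prefix_cost(calls, prefix_tokens, in_rate,
                cached=False, write_mult=1.25, read_mult=0.10):
    """USD spent on a shared prompt prefix across `calls` requests."""
    per_tok = in_rate / 1_000_000
    if not cached:
        return calls * prefix_tokens * per_tok
    # One cache write on the first call, cache reads on the rest.
    return prefix_tokens * per_tok * (write_mult + (calls - 1) * read_mult)

# 10,000-token system prompt, 1,000 calls, $3.00/1M input:
print(round(prefix_cost(1_000, 10_000, 3.00), 2))               # uncached
print(round(prefix_cost(1_000, 10_000, 3.00, cached=True), 2))  # cached
```

Under these assumptions the shared prefix drops from $30.00 to roughly $3.03 — close to a 90% saving on that portion of the bill, since nearly every call pays only the discounted read rate.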
How often does Anthropic change its prices?

Anthropic has historically adjusted prices when launching new model generations, often cutting rates to stay competitive. Buzzi.ai snapshots pricing daily — you can subscribe to price-drop alerts on any Anthropic model using the "Alert me" button on its detail page.
How does Anthropic pricing compare to other providers?

Use the main comparison wizard to run the same calculator across OpenAI, Google, Meta, Mistral, and 20+ other providers. Set your exact workload and get a ranked cost chart in under a minute.
Which Anthropic models support extended thinking?

Claude Opus 4.7, Claude Sonnet 4.6, Claude Opus 4.6 (Fast), Claude Opus 4.6 and 7 more offer an extended thinking or reasoning mode. The model spends extra compute "thinking" before answering — slower and more expensive, but meaningfully better on complex, multi-step problems. Standard mode is faster and cheaper for routine tasks.

Look wider

Compare Anthropic against other providers.

Open the full wizard — pick a use case, set your usage, and cross-compare against OpenAI, Anthropic, Google, and 20+ more.