LLM price comparison — find the right AI model for your project.

Free AI model cost calculator. Compare pricing, context windows, benchmarks, and capabilities across 300+ production LLMs.

  • We track 330+ AI models, updated daily.
  • Plain-language labels — no technical jargon.
  • Share your results with a link.

Qué muestran los datos

As of April 2026, the cheapest production AI model in the Buzzi.ai database is Arcee AI: Trinity Large Preview at $0.00 per million input tokens. We track 330 models by price, quality benchmarks, context window, and data residency — updated daily.

How it works

Three quick questions. A real cost number in return.

No sign-up, no spreadsheet, no jargon. Built for founders, product teams, and engineers who need an answer in under a minute.

  1. Pick a scenario.

    Tell us what you want AI to do — chat, code, extract data, understand images, reason, or process in bulk. We filter the model list to what matters.

  2. Set your usage.

    Share a rough sense of volume and message length. No tokens, no math — plain English with anchors like "a side project" or "production scale."

  3. Compare real costs.

    Every model card shows your personalized monthly cost. Side-by-side bars surface the cheapest pick and how much you’d save by switching.
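The three-step flow above boils down to simple arithmetic once an anchor is mapped to a token volume. This is a hypothetical sketch — the anchor volumes, the 75% input share, and the formula are illustrative assumptions, not Buzzi.ai's actual numbers:

```python
# Rough token volumes behind the plain-English anchors (assumed values,
# not the calculator's real mapping).
USAGE_ANCHORS = {
    "a side project": 2_000_000,      # ~2M tokens/month
    "a growing product": 50_000_000,  # ~50M tokens/month
    "production scale": 500_000_000,  # ~500M tokens/month
}

def monthly_cost(volume_label: str, input_share: float,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Estimate monthly spend from an anchor label and a model's list prices."""
    tokens = USAGE_ANCHORS[volume_label]
    input_tokens = tokens * input_share
    output_tokens = tokens * (1.0 - input_share)
    return (input_tokens / 1e6) * input_per_1m + (output_tokens / 1e6) * output_per_1m

# A side project on a model priced $0.32 in / $0.89 out per 1M tokens,
# assuming 75% of traffic is input tokens.
cost = monthly_cost("a side project", 0.75, 0.32, 0.89)
```

The point of the anchors is that users never see this math — they pick a label, and the calculator fills in the token volumes.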

What we track

One database. Every provider worth watching.

330 production-ready models across 54 providers — pricing, context window, benchmarks, regions, and compliance on every row. Refreshed each morning from official pricing pages, cross-checked against third-party aggregators.

  • Production-ready models: 330 tracked
  • Providers covered: 54 worldwide
  • Quality benchmarks: 1 per model
  • Refresh cadence: daily price sync

Priced today

The latest flagship model from every major lab.

Prices are per 1 million tokens. Cached and batched rates apply when you reuse prompts or accept a delay. Click a row to open the full model page.

| Provider | Model | Context | Input /1M | Output /1M |
| --- | --- | --- | --- | --- |
| OpenAI | o1-pro | 200K | $150.00 | $600.00 |
| OpenAI | GPT-5.4 Pro | 1.1M | $30.00 | $180.00 |
| OpenAI | GPT-5.2 Pro | 400K | $21.00 | $168.00 |
| Anthropic | Claude Opus 4.6 (Fast) | 1M | $30.00 | $150.00 |
| Anthropic | Claude Opus 4 | 200K | $15.00 | $75.00 |
| Anthropic | Claude Opus 4.1 | 200K | $15.00 | $75.00 |
| Google | Gemini 3.1 Pro Preview | 1.0M | $2.00 | $12.00 |
| Google | Gemini 3.1 Pro Preview Custom Tools | 1.0M | $2.00 | $12.00 |
| Google | Nano Banana Pro (Gemini 3 Pro Image Preview) | 66K | $2.00 | $12.00 |
| Alibaba | Tongyi DeepResearch 30B A3B | 131K | $0.09 | $0.45 |
| DeepSeek | R1 0528 | 164K | $0.50 | $2.15 |
| DeepSeek | DeepSeek V3.2 Speciale | 164K | $0.40 | $1.20 |
| DeepSeek | DeepSeek V3 | 164K | $0.32 | $0.89 |
| Amazon | Nova Premier 1.0 | 1M | $2.50 | $12.50 |
| Amazon | Nova Pro 1.0 | 300K | $0.80 | $3.20 |
| Amazon | Nova 2 Lite | 1M | $0.30 | $2.50 |
| NVIDIA | Llama 3.1 Nemotron 70B Instruct | 131K | $1.20 | $1.20 |
| NVIDIA | Nemotron Nano 12B 2 VL | 131K | $0.20 | $0.60 |
| NVIDIA | Nemotron 3 Super | 262K | $0.09 | $0.45 |
| MiniMax | MiniMax M1 | 1M | $0.40 | $2.20 |
| MiniMax | MiniMax M2-her | 66K | $0.30 | $1.20 |
| MiniMax | MiniMax M2.7 | 197K | $0.30 | $1.20 |

Top 3 priced models per provider by list price. Prices refreshed daily from each provider’s public pricing page.
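As a sanity check on reading the table: per-request cost is just tokens divided by one million, times the listed rate. This sketch uses two rows from the table and an arbitrary 3K-in / 1K-out request size:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Cost of one request at list price, given per-1M-token rates."""
    return (input_tokens / 1e6) * input_per_1m + (output_tokens / 1e6) * output_per_1m

# The same 3K-in / 1K-out request on two rows from the table above:
o1_pro = request_cost(3_000, 1_000, 150.00, 600.00)  # OpenAI o1-pro
gemini = request_cost(3_000, 1_000, 2.00, 12.00)     # Google Gemini 3.1 Pro Preview
```

The spread is the whole story: the same request costs dozens of times more on a frontier reasoning model than on a mid-tier one, which is why a personalized estimate beats eyeballing the rate card.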

Beyond sticker price

Five calculators that sit behind the main flow.

Once you’ve narrowed down, dig deeper — migration math, real-prompt costs, curated stacks, lifecycle risk, and compliance.

  • Switch cost calculator

    Before you migrate, see how one-off migration engineering hours weigh against the monthly savings over a 12-month horizon.

  • Prompt cost

    Paste a real prompt and reply. Get a per-provider cost at today’s rates, tokenized with the right family coefficient.

  • Model stacks

    Editorial picks for budget, balanced, and frontier use — curated by our applied-AI team, refreshed monthly.

  • Lifecycle timeline

    Which models are sunsetting, when, and what the provider is pushing customers toward.

  • Compliance matrix

    Regions and certifications (SOC 2, HIPAA, GDPR, FedRAMP) per provider, in one grid.
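The switch-cost idea above reduces to a break-even calculation: one-off migration cost divided by recurring monthly savings. A minimal sketch — the hourly rate and hour count here are placeholder assumptions, not the calculator's defaults:

```python
def breakeven_months(migration_hours: float, hourly_rate: float,
                     monthly_savings: float) -> float:
    """Months until cumulative savings cover the one-off migration cost."""
    if monthly_savings <= 0:
        return float("inf")  # switching never pays back
    return (migration_hours * hourly_rate) / monthly_savings

# 80 engineering hours at $120/h against $1,200/month saved:
months = breakeven_months(80, 120.0, 1_200.0)  # 8.0 — pays back within a year
```

If the break-even lands past the 12-month horizon, the cheaper model is probably not worth the migration.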

FAQ

Questions we get asked most.

Pricing freshness, sourcing, cache and batch discounts, embedding, alerts — all the things teams ask before picking a model.


As of April 2026, the lowest input $/1M on our comparison is Arcee AI: Trinity Large Preview. Real-world cost depends on your cache hit rate and batch eligibility.
We mirror pricing from official provider pricing pages and docs. Each model row has a "last verified" timestamp and a link to the source so you can check yourself.
A nightly snapshot cron diffs against the previous day. When a change is detected we log it and email subscribed users within 24 hours.
Models that offer cached input pricing get a separate column. The volume calculator multiplies your cache hit rate by the cached price and the rest by the standard input price.
Providers that support async batch endpoints usually list a reduced price. If a model row has a batch price, you can set the "batch eligible" slider to model cost savings for that workload share.
Our top recommendation: pick two candidates from the filtered shortlist, estimate break-even with the switch-cost calculator, and run your real prompts through "Compare my prompt" for a grounded test. Top 3 this month: Arcee AI: Trinity Large Preview, Auto Router, Body Builder (beta).
Yes. The comparison, calculators, and public JSON API are free. Signing in with Google unlocks "Compare my prompt", saved comparisons, and price alerts.
We list the top open-weight models (Meta Llama, Mistral, DeepSeek) when a pay-per-token API exists. Self-host cost modeling is not included since it depends on your GPU inventory.
Yes — the /embed route renders a minimal iframe with attribution. Use the embed builder on the main page to generate the snippet.
After signing in you can subscribe to any model. When the nightly snapshot detects a price change or deprecation, you get an email within 24 hours.
Each task has a weighted score over benchmarks relevant to that task plus a price pillar. We publish the exact weights on the methodology page.
No. The ranking is not pay-to-play. Providers pay us nothing.
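The task scoring described above — a weighted sum over benchmarks plus a price pillar — can be sketched like this. The benchmark names, scores, and weights are invented for illustration; the real weights are on the methodology page:

```python
def task_score(benchmarks: dict[str, float], weights: dict[str, float],
               price_score: float, price_weight: float) -> float:
    """Weighted benchmark average blended with a normalized price pillar."""
    bench = sum(benchmarks[name] * w for name, w in weights.items())
    return bench * (1.0 - price_weight) + price_score * price_weight

score = task_score(
    {"coding_eval": 0.82, "reasoning_eval": 0.74},  # normalized benchmark scores
    {"coding_eval": 0.7, "reasoning_eval": 0.3},    # task weights, summing to 1
    price_score=0.9,                                # cheaper models score higher
    price_weight=0.25,
)
```

Because the price pillar is a fixed, published weight, a provider cannot buy a better rank — only a better price or a better benchmark moves the score.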

Once a month

Get the LLM Market Pulse in your inbox.

New models, quiet deprecations, price moves — summarized in one short email each month. No spam; unsubscribe anytime.

Deploying at scale?

Need help choosing a model for your use case?

A 30-minute call with a Buzzi applied-AI lead. We look at your volume, your data, and your constraints, and recommend a stack you can actually deploy.