Liquid AI Models API Cost Calculator & Comparison

Every Liquid AI model, side by side — current API rates, context window, benchmarks, and a live calculator that ranks them at your exact workload. 3 active models, 3 with public pricing. Prices refreshed daily.

Models tracked

3

Active

3

With public pricing

3

Cheapest input

$0.00/1M

Calculate your Liquid AI API cost at your workload.

Set your workload — every priced model ranks in real time.

Adjust the workload

Every model below updates in real time.

Workload presets: 1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Ranked by your monthly bill


Pricing at a glance

Blended $/1M tokens across the lineup.

Blended price uses a 3-to-1 input/output ratio. Green bar = cheapest.
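The blended figure above can be sketched in a few lines. This is a minimal illustration of the stated 3-to-1 weighting, using the LFM2-24B-A2B rates from the table further down; the function name is ours, not part of any Liquid AI API.

```python
def blended_price(input_per_1m: float, output_per_1m: float) -> float:
    """Weighted average $/1M tokens, assuming 3 input tokens per output token."""
    return (3 * input_per_1m + 1 * output_per_1m) / 4

# LFM2-24B-A2B: $0.03 input / $0.12 output per 1M tokens
print(round(blended_price(0.03, 0.12), 4))  # 0.0525
```

At a 3:1 mix, a model with cheap input but expensive output can still rank well, since input dominates the weighting.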

Quick picks

Best Liquid AI model for your use case.

As of April 2026, Liquid AI offers 3 active models via API, ranging from $0/1M to $0.030/1M input tokens. The most context-rich model handles up to 33K tokens. One model, LFM2.5-1.2B-Thinking, supports deep reasoning. All prices are USD per 1 million tokens.

Quality vs price

Liquid AI benchmarks at a glance.

No benchmark data yet for Liquid AI.

Open weights

Open Models from Liquid AI

Liquid AI ships 3 open-source or open-weights models you can self-host or fine-tune. Each links to its Hugging Face card.

Every model

Every Liquid AI model — pricing, context & capabilities.

Model                | Context | Input /1M | Output /1M
LFM2.5-1.2B-Instruct | 33K     | $0.00     | $0.00
LFM2.5-1.2B-Thinking | 33K     | $0.00     | $0.00
LFM2-24B-A2B         | 33K     | $0.03     | $0.12

FAQ

Frequently asked questions

Pricing patterns, best-known use cases, and how this provider stacks up.


Liquid AI API pricing ranges from $0 to $0.030 per 1M input tokens. On paid models, output tokens cost more than input. Prices are per 1 million tokens (1M ≈ 750,000 words). Use the calculator above to estimate your monthly spend at your actual workload.
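The calculator's core arithmetic is simple enough to sketch directly: since rates are USD per 1M tokens, monthly cost is each token volume divided by one million, times its rate. The workload figures below are made-up examples; the rates are from the pricing table.

```python
def monthly_cost(input_tokens: int, output_tokens: int,
                 input_per_1m: float, output_per_1m: float) -> float:
    """Monthly USD spend given token volumes and per-1M-token rates."""
    return (input_tokens / 1e6) * input_per_1m + (output_tokens / 1e6) * output_per_1m

# Example: 250M input + 50M output tokens/month on LFM2-24B-A2B ($0.03 / $0.12)
cost = monthly_cost(250_000_000, 50_000_000, 0.03, 0.12)
print(f"${cost:.2f}/month")  # $13.50/month
```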
LFM2.5-1.2B-Instruct is the lowest-priced Liquid AI model with public pricing at $0/1M input tokens. It suits high-volume tasks where cost matters most — classification, extraction, summarization, and similar workloads that don't need frontier reasoning.
LFM2-24B-A2B is Liquid AI's highest-tier model at $0.030/1M input. It delivers the most sophisticated reasoning, instruction-following, and nuance. For workloads that don't require frontier performance, a mid-tier model typically cuts inference costs substantially.
LFM2.5-1.2B-Thinking supports a deep reasoning mode, which improves performance on multi-step coding, debugging, and code review. For simpler autocomplete or snippet generation, a faster, cheaper model often delivers acceptable quality at a fraction of the cost.
Yes — Liquid AI supports prompt caching (discounts for repeated context) and batch processing (accept a delay, cut costs ~50%). Where a model publishes cached and batch rates, they appear on its detail page. Caching pays off quickly if your prompts share a long system prompt or document prefix across many calls.
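To see how these discounts compound, here is a hypothetical sketch. The ~50% batch discount comes from the answer above; the 75% discount on cached tokens is an assumed example figure, not a published Liquid AI rate.

```python
def discounted_input_cost(tokens: int, rate_per_1m: float,
                          cached_fraction: float = 0.0,
                          cache_discount: float = 0.75,  # assumed, not a published rate
                          batch: bool = False) -> float:
    """Input-token cost in USD with optional cache hits and batch processing."""
    cached = tokens * cached_fraction
    fresh = tokens - cached
    cost = (fresh * rate_per_1m + cached * rate_per_1m * (1 - cache_discount)) / 1e6
    return cost * 0.5 if batch else cost  # batch processing cuts costs ~50%

# 100M input tokens at $0.03/1M, 80% cache-hit rate, submitted as a batch
print(round(discounted_input_cost(100_000_000, 0.03, 0.8, batch=True), 4))
```

With an 80% cache-hit rate and batching, the effective input cost drops well below half the list rate, which is why long shared prefixes are the first thing to optimize.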
Liquid AI has historically adjusted prices when launching new model generations, often cutting rates to stay competitive. Buzzi.ai snapshots pricing daily — you can subscribe to price-drop alerts on any Liquid AI model using the "Alert me" button on its detail page.
Use the main comparison wizard to run the same calculator across Liquid AI, Anthropic, Google, Meta, Mistral, and 20+ other providers. Set your exact workload and get a ranked cost chart in under a minute.
LFM2.5-1.2B-Thinking offers an extended thinking or reasoning mode. The model spends extra compute "thinking" before answering — slower and more expensive, but meaningfully better on complex, multi-step problems. Standard mode is faster and cheaper for routine tasks.

Look wider

Compare Liquid AI against other providers.

Open the full wizard — pick a use case, set your usage, and cross-compare against OpenAI, Anthropic, Google, and 20+ more.