Liquid AI: LFM2-24B-A2B

Public pricing · Intelligence 79/100 · Medium memory

LiquidAI: LFM2-24B-A2B is a text model designed for general chat, analysis, and production use. It pairs low latency and efficient inference with a 33K-token context window and a low-cost profile for reliable work at scale.

Input

$0.03/1M

Output

$0.12/1M

Cached

$0.01/1M

Batch

$0.01/1M

What would LFM2-24B-A2B cost you?

Set your workload and adjust the volume to see your exact monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Technical specifications

LFM2-24B-A2B at a glance.

Memory

32,768

tokens

Max reply

8,192

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

Released

Mar 2026

Training cutoff

Oct 2025

Availability

Public pricing

Status

Active

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
  • No image understanding (text only)
  • No tool use / function calling

When to pick LFM2-24B-A2B

  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.
  • You need tool-use / function calling for agent workflows.

FAQ

LFM2-24B-A2B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, LFM2-24B-A2B costs roughly $7 per month. Input is $0.03/1M tokens and output is $0.12/1M tokens.
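That $7 figure is straightforward to reproduce. A minimal sketch using the published rates from the pricing table above and the workload figures from this FAQ (no cached-input or batch discounts applied — the function name and structure here are illustrative, not part of any official SDK):

```python
# Published per-million-token rates for LFM2-24B-A2B (USD).
INPUT_RATE = 0.03   # $/1M input tokens
OUTPUT_RATE = 0.12  # $/1M output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimate the monthly bill (USD) for a chat workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_RATE + output_millions * OUTPUT_RATE

# The FAQ's example workload: 50,000 conversations/month,
# 1,500-token prompts, 800-token replies.
cost = monthly_cost(50_000, 1_500, 800)
print(f"${cost:.2f}/month")  # → $7.05/month
```

Input contributes 75M tokens ($2.25) and output 40M tokens ($4.80), so output dominates the bill even at a quarter of the input rate; trimming reply length is the bigger lever at this workload.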
LFM2-24B-A2B has a 32,768-token context window (medium memory — a long report or a codebase file). At roughly 0.75 words per token, that means you can fit about 24,500 words of input and history in a single call.
Beyond text generation, LFM2-24B-A2B supports fine-tuning on your own data. It streams replies by default.
LFM2-24B-A2B was released in March 2026, with training data cut off around October 2025.
Models in a similar class include LFM2.5-1.2B-Instruct, LFM2.5-1.2B-Thinking, Llama 3 8B Instruct. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare LFM2-24B-A2B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.