Liquid AI: LFM2-24B-A2B

Public pricing · Intelligence 79/100 · Medium memory

LiquidAI: LFM2-24B-A2B์€ ์ผ๋ฐ˜ ๋Œ€ํ™”, ๋ถ„์„, ์šด์˜ ํ™˜๊ฒฝ์— ๋งž์ถ˜ ํ…์ŠคํŠธ ๋ชจ๋ธ์ž…๋‹ˆ๋‹ค. ๋‚ฎ์€ ์ง€์—ฐ๊ณผ ํšจ์œจ์  ์ถ”๋ก , 33K tokens์˜ ์ปจํ…์ŠคํŠธ, ์ €๋น„์šฉ ํŠน์„ฑ์„ ๊ฒฐํ•ฉํ•ด general chat, analysis, and production workloads์—์„œ ์•ˆ์ •์ ์ธ ์ž‘์—…์„ ๋•์Šต๋‹ˆ๋‹ค. ํŠนํžˆ ์ง€์—ฐ, ๋น„์šฉ, ์ฒ˜๋ฆฌ๋Ÿ‰๊ฐ€ ์ค‘์š”ํ•œ ๊ฒฝ์šฐ์— ์ž˜ ๋งž์œผ๋ฉฐ, ์•ˆ์ •์ ์ธ ์ถœ๋ ฅ, ์œ ์—ฐํ•œ ๋ฐฐํฌ, ํ™•์žฅ์„ฑ์„ ์ค‘์‹œํ•˜๋Š” ํŒ€์— ์‹ค์šฉ์ ์ž…๋‹ˆ๋‹ค. ์•ˆ์ •์ ์ธ ์‘๋‹ต, ๋„“์€ ๋ฌธ๋งฅ ์ฒ˜๋ฆฌ, ๊ทธ๋ฆฌ๊ณ  ์‹œ์ œํ’ˆ๋ถ€ํ„ฐ ์šด์˜๊นŒ์ง€ ์ด์–ด์ง€๋Š” ์œ ์—ฐ์„ฑ์ด ํ•„์š”ํ•  ๋•Œ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค. ์•ˆ์ •์ ์ธ ์‘๋‹ต, ๋„“์€ ๋ฌธ๋งฅ ์ฒ˜๋ฆฌ, ๊ทธ๋ฆฌ๊ณ  ์‹œ์ œํ’ˆ๋ถ€ํ„ฐ ์šด์˜๊นŒ์ง€ ์ด์–ด์ง€๋Š” ์œ ์—ฐ์„ฑ์ด ํ•„์š”ํ•  ๋•Œ ์œ ์šฉํ•ฉ๋‹ˆ๋‹ค.

Input

$0.03/1M

Output

$0.12/1M

Cached

$0.01/1M

Batch

$0.01/1M

Calculate your LFM2-24B-A2B bill.

Set your workload โ€” see cost at your exact volume.

What would LFM2-24B-A2B cost you?

Adjust the workload to see your monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M
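The calculator above boils down to simple per-token arithmetic. A minimal sketch, using the public rates listed on this page ($0.03/1M input, $0.12/1M output) and ignoring the cached and batch tiers:

```python
def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=0.03, output_rate=0.12):
    """Estimate the monthly LFM2-24B-A2B bill in dollars.

    conversations: calls per month
    prompt_tokens / reply_tokens: average tokens per call
    input_rate / output_rate: price per 1M tokens (public rates)
    """
    input_cost = conversations * prompt_tokens / 1_000_000 * input_rate
    output_cost = conversations * reply_tokens / 1_000_000 * output_rate
    return input_cost + output_cost


# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1500, 800), 2))  # → 7.05
```

Output scales linearly in every input, so doubling traffic simply doubles the bill.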

Technical specifications

LFM2-24B-A2B at a glance.

Memory

32,768

tokens

Max reply

8,192

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

โ€”

Released

Mar 2026

Training cutoff

Oct 2025

Availability

Public pricing

Status

active

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick LFM2-24B-A2B

  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images โ€” pick a vision-capable model instead.
  • You need tool-use / function calling for agent workflows.

FAQ

LFM2-24B-A2B โ€” the questions we see most.

Pricing, capabilities, alternatives โ€” generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, LFM2-24B-A2B costs roughly $7 per month. Input is $0.03/1M tokens and output is $0.12/1M tokens.
LFM2-24B-A2B has a 32,768-token context window (medium memory — a long report or a codebase file). At the common estimate of ~0.75 words per token, that is roughly 24,500 words of input and history in a single call.
Beyond text generation, LFM2-24B-A2B supports fine-tuning on your own data. It streams replies by default.
LFM2-24B-A2B was released in March 2026, with training data cut off around October 2025.
Models in a similar class include LFM2.5-1.2B-Instruct, LFM2.5-1.2B-Thinking, and Llama 3 8B Instruct. The "Similar models" section below this FAQ links to each.
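The context and reply limits above can be sanity-checked with a small helper. This is a sketch assuming the simplest accounting, where prompt plus reply must fit within the 32,768-token window and the reply alone within the 8,192-token cap:

```python
CONTEXT_WINDOW = 32_768   # max tokens per call (prompt + history + reply)
MAX_REPLY = 8_192         # max tokens in a single reply


def fits(prompt_tokens, reply_tokens,
         context=CONTEXT_WINDOW, max_reply=MAX_REPLY):
    """Return True if a call with this prompt/reply budget fits the model's limits."""
    return (reply_tokens <= max_reply
            and prompt_tokens + reply_tokens <= context)


# The FAQ's typical workload fits comfortably:
print(fits(1500, 800))        # → True
# A maximal reply leaves at most 24,576 tokens for the prompt:
print(fits(30_000, 8_192))    # → False
```

One practical consequence: if you always reserve the full 8,192-token reply budget, prompts and history must stay under 24,576 tokens.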

Still unsure?

Compare LFM2-24B-A2B against 100+ other models.

Open the full wizard โ€” pick a use case, set your usage, and see side-by-side monthly costs in under a minute.