Liquid AI: LFM2-24B-A2B
LFM2-24B-A2B is a text model from Liquid AI for general chat, analysis, and production use. It combines low latency and efficient inference with a 33K-token context window and a low-cost profile, making it a good fit for workloads where latency, cost, and throughput matter.
Input: $0.03/1M tokens
Output: $0.12/1M tokens
Cached: $0.01/1M tokens
Batch: $0.01/1M tokens
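For budgeting, the listed per-1M-token rates can be turned into a simple cost estimate. The sketch below assumes cached tokens are billed at the cached rate in place of the standard input rate; the function and rate names are illustrative, not part of any official SDK.

```python
# USD per 1M tokens, taken from the pricing listed above.
RATES = {
    "input": 0.03,
    "output": 0.12,
    "cached": 0.01,  # assumption: cached tokens replace input-rate billing
}

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the USD cost of one request against the listed rates."""
    billable_input = input_tokens - cached_tokens
    return (
        billable_input * RATES["input"]
        + cached_tokens * RATES["cached"]
        + output_tokens * RATES["output"]
    ) / 1_000_000

# Example: 20K input tokens (5K of them cached) and 2K output tokens.
print(f"${request_cost(20_000, 2_000, cached_tokens=5_000):.6f}")
```

At these rates a request with 20K input tokens (5K cached) and 2K output tokens costs well under a tenth of a cent, which is the point of the low-cost profile.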