Mistral: Mistral Small 3.2 24B

Public pricing · Intelligence 79/100 · Medium memory · Vision · Tool use

Mistral: Mistral Small 3.2 24B is a multimodal model built for vision and language understanding. It combines multimodal input processing, image understanding, a 128K-token context window, and a low-cost profile to deliver reliable results in vision-language understanding and content analysis. It is a practical choice when latency, cost, and throughput matter, especially for teams that need consistent outputs, flexible deployment, and room to scale.

Input

$0.07/1M

Output

$0.20/1M

Cached

$0.02/1M

Batch

$0.04/1M

Calculate your Mistral Small 3.2 24B bill.

Set your workload — see cost at your exact volume.

What would Mistral Small 3.2 24B cost you?

Adjust the workload to see your monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M
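The calculator's arithmetic is simple enough to sketch directly. A minimal version using the public per-token rates listed above (the workload numbers are the example quoted in the FAQ below, not a recommendation):

```python
# Public rates for Mistral Small 3.2 24B, in USD per 1M tokens.
INPUT_RATE = 0.07
OUTPUT_RATE = 0.20

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly bill in USD for a given workload."""
    input_cost = conversations * prompt_tokens / 1_000_000 * INPUT_RATE
    output_cost = conversations * reply_tokens / 1_000_000 * OUTPUT_RATE
    return input_cost + output_cost

# Example workload: 50,000 conversations/month, 1,500-token prompts, 800-token replies.
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # → $13.25
```

Applying the cached-input ($0.02/1M) or batch ($0.04/1M) rates to a share of the traffic lowers the figure further; the sketch above assumes everything is billed at the standard rates.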

Technical specifications

Mistral Small 3.2 24B at a glance.

Memory

128,000

tokens

Max reply

32,768

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

mistral

Released

Jun 2025

Training cutoff

Oct 2024

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • MMLU: 81

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick Mistral Small 3.2 24B

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.
  • Multimodal pipelines mixing text + images.
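For the screenshot-analysis and multimodal cases above, most providers expose this model through an OpenAI-style chat API where text and images share one message. A hedged sketch of such a payload — the model id and image URL are illustrative assumptions, not taken from this page:

```python
# Sketch of an OpenAI-style multimodal chat payload. The model id below is a
# hypothetical provider identifier; check your provider's catalog for the real one.
payload = {
    "model": "mistralai/mistral-small-3.2-24b-instruct",  # assumed id
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this screenshot."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/screenshot.png"},
                },
            ],
        }
    ],
    "max_tokens": 800,  # well under the model's 32,768-token reply limit
}
```

The same payload shape accepts a `tools` array for the agentic, function-calling workflows listed above.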

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Mistral Small 3.2 24B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mistral Small 3.2 24B costs roughly $14 per month. Input is $0.07/1M tokens and output is $0.20/1M tokens.
Mistral Small 3.2 24B has a 128,000-token context window (medium memory — a long report or a codebase file). That means you can fit about 24,000 words of input and history in a single call.
Beyond text generation, Mistral Small 3.2 24B supports image understanding, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Mistral Small 3.2 24B was released in June 2025, with training data cut off around October 2024.
Models in a similar class include Ministral 3 3B 2512, Voxtral Small 24B 2507, and Ministral 3 8B 2512. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Mistral Small 3.2 24B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.