Mistral: Mistral Large 3 2512

Public pricing · Intelligence 79/100 · Large memory · Vision · Tool use

Mistral: Mistral Large 3 2512 is a multimodal vision-language model. It combines multimodal input processing and image understanding with a 262K-token context window and a balanced price profile, delivering reliable performance in vision-language understanding and content-analysis tasks.

Input

$0.50/1M

Output

$1.50/1M

Cached

$0.05/1M

Batch

$0.25/1M
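The four rates above can be combined into a quick monthly-cost estimate. A minimal sketch (the workload numbers are example inputs, and the batch rate is assumed to be a flat per-token rate):

```python
# Per-million-token rates for Mistral Large 3 2512 (from the pricing table above)
RATES = {
    "input": 0.50,   # $ per 1M prompt tokens
    "output": 1.50,  # $ per 1M completion tokens
    "cached": 0.05,  # $ per 1M cached prompt tokens
    "batch": 0.25,   # $ per 1M tokens via batch processing (assumed flat rate)
}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=RATES["input"], output_rate=RATES["output"]):
    """Estimate the monthly bill for a simple chat workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * input_rate + output_millions * output_rate

# Example: 50,000 conversations with 1,500-token prompts and 800-token replies
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # → $97.50
```

This is the same arithmetic the calculator below performs: 75M input tokens at $0.50/1M plus 40M output tokens at $1.50/1M.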

Calculate your Mistral Large 3 2512 bill.

Set your workload — see cost at your exact volume.

What would Mistral Large 3 2512 cost you?

Adjust the workload to see your monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Technical specifications

Mistral Large 3 2512 at a glance.

Memory

262,144

tokens

Max reply

32,768

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

mistral

Released

Dec 2025

Training cutoff

Jun 2025

Availability

Public pricing

Status

active

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
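The tool-calling and strict-JSON capabilities above are typically exercised through a chat-completions-style request. A hedged sketch of such a request body — the model identifier and the tool schema here are illustrative assumptions, not values taken from this page:

```python
import json

# Illustrative request body for a chat-completions-style endpoint.
# "mistral-large-3-2512" and get_weather are assumptions for demonstration only.
payload = {
    "model": "mistral-large-3-2512",  # hypothetical API identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Constrain the model to emit valid JSON (the "strict JSON output" capability)
    "response_format": {"type": "json_object"},
}

print(json.dumps(payload, indent=2))
```

The model either answers directly or returns a tool call naming `get_weather` with its arguments; your code runs the tool and sends the result back in a follow-up message.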

When to pick Mistral Large 3 2512

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Mistral Large 3 2512 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mistral Large 3 2512 costs roughly $98 per month. Input is $0.50/1M tokens and output is $1.50/1M tokens.
Mistral Large 3 2512 has a 262,144-token context window (large memory — an entire book or large codebase). Using the common rule of thumb of about ¾ of a word per token, that means you can fit roughly 196,000 words of input and history in a single call.
Beyond text generation, Mistral Large 3 2512 supports image understanding, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Mistral Large 3 2512 was released in December 2025, with training data cut off around June 2025.
Models in a similar class include Ministral 3 14B 2512, Ministral 3 8B 2512, and Mistral Small 4. The "Similar models" section below this FAQ links to each.
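The context-window arithmetic in the FAQ can be sketched as a quick fit check. This assumes the ~¾-word-per-token rule of thumb (an approximation, not a property of the mistral tokenizer specifically) and that the 32,768-token max reply shares the same window, which may vary by provider:

```python
# Rough check: will a document fit in the 262,144-token context window?
CONTEXT_TOKENS = 262_144   # context window from the spec table
MAX_REPLY_TOKENS = 32_768  # max reply from the spec table
WORDS_PER_TOKEN = 0.75     # common rule-of-thumb approximation

def fits_in_context(word_count, reply_budget=MAX_REPLY_TOKENS):
    """Return True if `word_count` words of input plus a full-length
    reply budget are estimated to fit within the context window."""
    estimated_input_tokens = word_count / WORDS_PER_TOKEN
    return estimated_input_tokens + reply_budget <= CONTEXT_TOKENS

print(fits_in_context(150_000))  # a long book's worth of words → True
print(fits_in_context(250_000))  # → False
```

At 150,000 words the estimate is ~200,000 input tokens, leaving headroom for a full reply; at 250,000 words the input alone overshoots the window.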

Still unsure?

Compare Mistral Large 3 2512 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.