Mistral: Pixtral Large 2411

Public pricing · Intelligence 79/100 · Medium memory · Vision · Tool use

Mistral: Pixtral Large 2411 is a multimodal model built for vision-language understanding. It combines multimodal input handling, image understanding, a 131K-token context window, and a balanced cost profile to deliver dependable vision-language understanding and content analysis. It is a practical choice when quality, speed, and cost all matter, especially for teams that need stable output.

Input

$2.00/1M

Output

$6.00/1M

Cached

$0.20/1M

Batch

$1.00/1M
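The per-token rates above translate into a monthly bill with a few lines of arithmetic. A minimal sketch, with the list prices hardcoded from the table above (cached-input and batch discounts are ignored):

```python
# Pixtral Large 2411 list prices, USD per 1M tokens (from the table above)
INPUT_PER_M = 2.00
OUTPUT_PER_M = 6.00

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly bill at list prices, no cache or batch discounts."""
    input_cost = conversations * prompt_tokens / 1_000_000 * INPUT_PER_M
    output_cost = conversations * reply_tokens / 1_000_000 * OUTPUT_PER_M
    return input_cost + output_cost

# The workload used in the FAQ below: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies
print(monthly_cost(50_000, 1_500, 800))  # → 390.0
```

Routing eligible traffic through cached input ($0.20/1M) or the batch tier ($1.00/1M) would lower the input side of this estimate accordingly.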

Calculate your Pixtral Large 2411 bill.

Set your workload — see cost at your exact volume.


Technical specifications

Pixtral Large 2411 at a glance.

Memory

131,072

tokens

Max reply

4,096

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

mistral

Released

Nov 2024

Training cutoff

Jul 2024

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aa_intelligence_index: 14

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
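Tool calling and strict JSON output are typically requested through the chat-completions payload. The sketch below builds a hypothetical request body; the field names follow the common OpenAI-style schema and the model id is an assumption, so check Mistral's API reference for the exact shape:

```python
import json

# Hypothetical request body for a tool-calling chat completion.
# Model id and field names are assumptions; consult the provider docs.
request = {
    "model": "pixtral-large-2411",
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # Strict JSON output mode (listed in the capabilities above)
    "response_format": {"type": "json_object"},
}

print(json.dumps(request, indent=2))
```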

When to pick Pixtral Large 2411

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Multimodal pipelines mixing text + images.
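Mixing text and images in one call usually means sending a content array with text and image parts. A sketch of one such message; the image-part field shape is an assumption, and a real pipeline would read actual PNG/JPEG bytes from disk:

```python
import base64

# Placeholder bytes just to demonstrate base64 data-URL encoding;
# substitute real image bytes in practice.
fake_png_bytes = b"\x89PNG\r\n\x1a\n"
data_url = "data:image/png;base64," + base64.b64encode(fake_png_bytes).decode()

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Extract every line of text from this screenshot."},
        {"type": "image_url", "image_url": data_url},  # field shape is an assumption
    ],
}
```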

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Pixtral Large 2411 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Pixtral Large 2411 costs roughly $390 per month. Input is $2.00 /1M tokens and output is $6.00 /1M tokens.
Pixtral Large 2411 has a 131,072-token context window (medium memory — a long report or a codebase file). At roughly 0.75 words per token, that means you can fit about 98,000 words of input and history in a single call.
Beyond text generation, Pixtral Large 2411 supports image understanding, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Pixtral Large 2411 was released in November 2024, with training data cut off around July 2024.
Models in a similar class include Mistral Large 3 2512, Mistral Medium 3, and Mistral Medium 3.1. The "Similar models" section below this FAQ links to each.
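Whether a given document fits the 131,072-token window can be estimated before sending. A rough sketch using the common ~0.75-words-per-token heuristic; exact counts come from the mistral tokenizer:

```python
CONTEXT_WINDOW = 131_072   # tokens
MAX_REPLY = 4_096          # tokens reserved for the model's reply

def fits_in_context(word_count: int, words_per_token: float = 0.75) -> bool:
    """Rough check: does a prompt of word_count words leave room for a reply?"""
    est_tokens = word_count / words_per_token
    return est_tokens + MAX_REPLY <= CONTEXT_WINDOW

print(fits_in_context(50_000))   # ~66,700 estimated tokens plus reply: fits
print(fits_in_context(120_000))  # ~160,000 estimated tokens: does not fit
```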

Still unsure?

Compare Pixtral Large 2411 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.