Mistral: Mixtral 8x22B Instruct

Public pricing · Intelligence 70/100 · Medium memory · Tool use

Mistral: Mixtral 8x22B Instruct is a text model built for coding, debugging, and technical work. It combines strong coding performance, low latency with efficient inference, a 65,536-token context window, and a balanced cost profile, making it dependable for coding, debugging, and technical writing. It is a practical choice when latency, cost, and throughput matter.

Input

$2.00/1M

Output

$6.00/1M

Cached

$0.20/1M

Batch

$1.00/1M
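
The per-token rates above turn into a monthly bill with straightforward arithmetic. Below is a minimal Python sketch of that calculation; the rate constants mirror the published prices, and `monthly_cost` is a hypothetical helper, not part of any official SDK.

```python
# Hypothetical cost estimator based on the published Mixtral 8x22B Instruct
# rates above ($2.00/1M input tokens, $6.00/1M output tokens).
INPUT_PER_M = 2.00    # dollars per 1M input tokens
OUTPUT_PER_M = 6.00   # dollars per 1M output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly bill in dollars for a steady workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_PER_M + output_millions * OUTPUT_PER_M

# The FAQ's reference workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # -> 390.0
```

Cached-input and batch rates ($0.20/1M and $1.00/1M) slot in the same way, as extra terms with their own token counts.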

Calculate your Mixtral 8x22B Instruct bill.

Set your workload — see cost at your exact volume.


Technical specifications

Mixtral 8x22B Instruct at a glance.

Memory

65,536

tokens

Max reply

4,096

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

mistral

Released

Apr 2024

Training cutoff

Jan 2024

Availability

Public pricing

Status

Active
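
Given the figures above (65,536-token window, 4,096-token max reply), a quick pre-flight check can tell you whether a prompt will fit. The sketch below uses a rough 4-characters-per-token heuristic, which is an assumption; for exact counts, use the mistral tokenizer listed above.

```python
# Rough fit check against Mixtral 8x22B Instruct's context limits.
CONTEXT_WINDOW = 65_536  # total tokens per call
MAX_REPLY = 4_096        # tokens reserved for the model's answer

def fits_in_context(prompt: str, reply_budget: int = MAX_REPLY) -> bool:
    estimated_tokens = len(prompt) // 4  # crude heuristic, not an exact count
    return estimated_tokens + reply_budget <= CONTEXT_WINDOW

document = open("report.txt").read()  # hypothetical long input
if not fits_in_context(document):
    print("Trim or summarize before sending.")
```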

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • MMLU: 77.8
  • HumanEval: 45.1
  • Artificial Analysis Intelligence Index: 10
  • IFEval: 71.84
  • BBH: 44.11
  • MMLU-Pro: 38.7

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
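
Tool use and strict JSON are the capabilities most teams wire up first. The sketch below shows a tool-calling request through an OpenAI-compatible client; the `base_url`, API key, and model id are placeholders, so substitute whatever your provider (Mistral's La Plateforme, OpenRouter, and so on) documents.

```python
# Tool-calling sketch against an OpenAI-compatible endpoint (assumed setup).
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1",  # placeholder endpoint
                api_key="...")                           # your provider key

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mixtral-8x22b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)
```

For strict JSON output, the same call typically takes `response_format={"type": "json_object"}`, though exact support depends on the serving provider.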

When to pick Mixtral 8x22B Instruct

  • Agentic workflows that call tools or APIs.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Mixtral 8x22B Instruct — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mixtral 8x22B Instruct costs roughly $390 per month. Input is $2.00/1M tokens and output is $6.00/1M tokens.
Mixtral 8x22B Instruct has a 65,536-token context window (medium memory: a long report or a codebase file). That means you can fit roughly 49,000 words of input and history in a single call, at a typical 0.75 words per token.
Beyond text generation, Mixtral 8x22B Instruct supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Mixtral 8x22B Instruct was released in April 2024, with training data cut off around January 2024.
Models in a similar class include Pixtral Large 2411, Mixtral 8x7B Instruct, and Mistral Large 3 2512; the "Similar models" section below this FAQ links to each.
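
Since the FAQ notes that replies stream by default, here is a minimal streaming sketch under the same assumptions as the tool-calling example above (OpenAI-compatible endpoint, placeholder model id):

```python
# Streaming sketch: print tokens as they arrive (assumed endpoint and model id).
from openai import OpenAI

client = OpenAI(base_url="https://example-provider/v1", api_key="...")

stream = client.chat.completions.create(
    model="mixtral-8x22b-instruct",  # placeholder model id
    messages=[{"role": "user", "content": "Summarize MoE routing in two sentences."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```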

Still unsure?

Compare Mixtral 8x22B Instruct against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.