Mistral: Mixtral 8x22B Instruct

Public pricing · Intelligence 70/100 · Medium memory · Tool use

Mistral: Mixtral 8x22B Instruct is a text-only model for coding, debugging, and technical tasks. It combines strong coding performance with low latency and efficient inference, offers a 64K-token (65,536) context window, and sits at a balanced price point, making it a dependable choice for coding, debugging, and technical writing.

Input

$2.00/1M

Output

$6.00/1M

Cached

$0.20/1M

Batch

$1.00/1M

Calculate your Mixtral 8x22B Instruct bill.

Set your workload — see cost at your exact volume.


1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M
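The calculator's arithmetic is simple enough to sketch. A minimal estimate in Python, assuming only the flat list prices above (no cached-input or batch discounts):

```python
# Estimate a monthly Mixtral 8x22B Instruct bill from the list prices above.
# Assumes flat per-token pricing: no cached-input or batch discounts applied.

INPUT_PRICE = 2.00   # USD per 1M input tokens
OUTPUT_PRICE = 6.00  # USD per 1M output tokens

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Return the estimated monthly cost in USD for a given workload."""
    input_total = conversations * prompt_tokens / 1_000_000 * INPUT_PRICE
    output_total = conversations * reply_tokens / 1_000_000 * OUTPUT_PRICE
    return input_total + output_total

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # → 390.0
```

Input dominates volume here (75M tokens vs 40M), but output's 3x price means the reply side still contributes most of the bill at longer reply lengths.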

Technical specifications

Mixtral 8x22B Instruct at a glance.

Memory

65,536

tokens

Max reply

4,096

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

mistral

Released

Apr 2024

Training cutoff

Jan 2024

Availability

Public pricing

Status

active
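One practical consequence of these numbers: if you reserve the full 4,096-token reply, your prompt plus conversation history must fit in the remaining budget. A minimal sketch (token counts in practice come from the mistral tokenizer, not shown here):

```python
# Context-budget check based on the specs above:
# 65,536-token window, 4,096-token maximum reply.

CONTEXT_WINDOW = 65_536  # total tokens available per call
MAX_REPLY = 4_096        # maximum completion length

def prompt_budget(reserved_reply=MAX_REPLY):
    """Tokens left for prompt + history after reserving reply space."""
    return CONTEXT_WINDOW - reserved_reply

def fits(prompt_tokens, reserved_reply=MAX_REPLY):
    """True if a prompt of this size fits alongside the reserved reply."""
    return prompt_tokens <= prompt_budget(reserved_reply)

print(prompt_budget())  # → 61440
print(fits(60_000))     # → True
print(fits(62_000))     # → False
```

Reserving less than the maximum reply frees up more room for history, at the risk of truncated completions.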

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • Artificial Analysis Intelligence Index: 10
  • BBH: 44.11
  • HumanEval: 45.1
  • IFEval: 71.84
  • MMLU: 77.8
  • MMLU-Pro: 38.7

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

Not supported: image understanding, deep step-by-step reasoning.
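To make the JSON and streaming capabilities concrete, here is a sketch of a request payload. It follows the shape of Mistral's chat-completions API (the model id `open-mixtral-8x22b` and the `response_format` field are Mistral's conventions; verify against your provider's documentation before relying on them):

```python
import json

# Sketch of a strict-JSON, streaming request payload for Mixtral 8x22B
# Instruct. Field names follow Mistral's chat-completions API; check your
# provider's docs, since gateways sometimes rename model ids or fields.

def build_json_request(user_prompt):
    return {
        "model": "open-mixtral-8x22b",
        "messages": [{"role": "user", "content": user_prompt}],
        "response_format": {"type": "json_object"},  # strict JSON output
        "stream": True,                              # stream reply tokens
    }

payload = build_json_request("List three Python testing frameworks as JSON.")
print(json.dumps(payload, indent=2))
```

With `response_format` set to `json_object`, the model is constrained to emit parseable JSON; the prompt should still describe the schema you expect.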

When to pick Mixtral 8x22B Instruct

  • Agentic workflows that call tools or APIs.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Mixtral 8x22B Instruct — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mixtral 8x22B Instruct costs roughly $390 per month. Input is $2.00/1M tokens and output is $6.00/1M tokens.
Mixtral 8x22B Instruct has a 65,536-token context window (medium memory — a long report or a codebase file). At a rough 0.75 words per token, that means you can fit about 49,000 words of input and history in a single call.
Beyond text generation, Mixtral 8x22B Instruct supports calling functions / tools, strict JSON output, fine-tuning on your own data. It streams replies by default.
Mixtral 8x22B Instruct was released in April 2024, with training data cut off around January 2024.
Models in a similar class include Pixtral Large 2411, Mixtral 8x7B Instruct, and Mistral Large 3 2512. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Mixtral 8x22B Instruct against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.