Mistral: Mixtral 8x7B Instruct

Public pricing · Intelligence 76/100 · Medium memory · Tool use

Mistral: Mixtral 8x7B Instruct is a text model built for general chat, analysis, and production workloads. It combines dependable general-purpose performance, a 32,768-token context window, and balanced cost, making it a steady choice when quality, speed, and cost all matter. It suits teams that value stable output, flexible deployment, and scalability, and is useful when you need consistent responses, broad context handling, and the flexibility to move from prototype to production.

Input: $0.54/1M tokens

Output: $0.54/1M tokens

Cached: $0.14/1M tokens

Batch: $0.27/1M tokens
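The cached and batch tiers are straight discounts off the on-demand rate. A minimal sketch of the arithmetic, using the per-1M-token rates from the table above:

```python
# Per-1M-token rates from the pricing table above
ON_DEMAND = 0.54  # standard input/output rate
CACHED = 0.14     # cached input tokens
BATCH = 0.27      # batch-tier tokens

batch_discount = 1 - BATCH / ON_DEMAND    # 50% off on-demand
cached_discount = 1 - CACHED / ON_DEMAND  # ~74% off on-demand

print(f"batch: {batch_discount:.0%} off, cached: {cached_discount:.0%} off")
```

For prompts with large repeated prefixes (system prompts, shared documents), routing repeat tokens through the cached rate is where most of the savings come from.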

Calculate your Mixtral 8x7B Instruct bill.

Set your workload β€” see cost at your exact volume.

What would Mixtral 8x7B Instruct cost you?

Adjust the workload to see your monthly bill.


Technical specifications

Mixtral 8x7B Instruct at a glance.

Memory: 32,768 tokens

Max reply: 16,384 tokens

Memory tier: Medium (a long report or a codebase file)

Tokenizer: mistral

Released: Dec 2023

Training cutoff: Oct 2023

Availability: Public pricing

Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • mmlu: 70.6
  • aa_intelligence_index: 8
  • ifeval: 55.99
  • bbh: 29.74
  • mmlu_pro: 29.91

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick Mixtral 8x7B Instruct

  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images β€” pick a vision-capable model instead.

FAQ

Mixtral 8x7B Instruct β€” the questions we see most.

Pricing, capabilities, alternatives β€” generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mixtral 8x7B Instruct costs roughly $62 per month. Input is $0.54/1M tokens and output is $0.54/1M tokens.
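The $62 figure can be reproduced directly from the per-token rates. A minimal sketch (the conversation count and token sizes are the example workload from the answer above, not fixed parameters):

```python
# Mixtral 8x7B Instruct rates, expressed per token
INPUT_RATE = 0.54 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.54 / 1_000_000  # $ per output token

# Example workload from the FAQ answer
conversations = 50_000
prompt_tokens = 1_500
reply_tokens = 800

monthly_cost = conversations * (
    prompt_tokens * INPUT_RATE + reply_tokens * OUTPUT_RATE
)
print(f"${monthly_cost:,.2f}/month")  # ≈ $62
```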
Mixtral 8x7B Instruct has a 32,768-token context window (medium memory, enough for a long report or a codebase file). That means roughly 24,500 words of input and history fit in a single call, assuming about 0.75 English words per token.
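Word capacity is only ever an estimate; a common heuristic for English prose is about 0.75 words per token (the exact ratio depends on the tokenizer and the text):

```python
CONTEXT_TOKENS = 32_768
WORDS_PER_TOKEN = 0.75  # rough heuristic for English prose

approx_words = int(CONTEXT_TOKENS * WORDS_PER_TOKEN)
print(approx_words)  # 24576
```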
Beyond text generation, Mixtral 8x7B Instruct supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
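As a sketch, tool calling and strict JSON output are typically exercised by adding a `tools` list and a `response_format` field to the chat request. The payload below follows the widely used OpenAI-compatible schema; the model identifier and the `get_weather` tool are illustrative examples, and the exact field contract varies by provider:

```python
import json

# Hypothetical chat request exercising tool calling and strict JSON output.
# Field names follow the common OpenAI-compatible convention; check your
# provider's docs for the exact contract.
request = {
    "model": "mistralai/mixtral-8x7b-instruct",  # example identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris? Reply as JSON."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "response_format": {"type": "json_object"},  # ask for strict JSON
    "stream": True,  # replies stream on most hosts
}

payload = json.dumps(request)  # body you would POST to the chat endpoint
```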
Mixtral 8x7B Instruct was released in December 2023, with training data cut off around October 2023.
Models in a similar class include Mistral Large 3 2512, Devstral 2 2512, and Devstral Medium; the "Similar models" section below this FAQ links to each.

Still unsure?

Compare Mixtral 8x7B Instruct against 100+ other models.

Open the full wizard β€” pick a use case, set your usage, and see side-by-side monthly costs in under a minute.