Mistral: Codestral 2508

Public pricing · Intelligence 79/100 · Large memory · Tool use

Mistral: Codestral 2508 is a text-only model for coding, debugging, and technical tasks. It pairs strong coding performance with low-latency, efficient inference, a 256K-token context window, and a low-cost profile, delivering reliable results in code generation, debugging, and technical writing.

Input

$0.30/1M

Output

$0.90/1M

Cached

$0.03/1M

Batch

$0.15/1M

Calculate your Codestral 2508 bill.

Set your workload — see cost at your exact volume.

What would Codestral 2508 cost you?

Adjust the workload to see your monthly bill.
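The listed rates translate into a monthly estimate with simple arithmetic. Below is a minimal sketch in Python using the prices from the table above; the workload numbers are whatever you plug in. One assumption to flag: the table shows a single batch rate, and whether batch pricing also discounts output isn't stated, so this sketch applies it to input only.

```python
# Published per-1M-token rates for Codestral 2508 (from the pricing table).
INPUT_RATE = 0.30        # $ per 1M input tokens
OUTPUT_RATE = 0.90       # $ per 1M output tokens
CACHED_RATE = 0.03       # $ per 1M cached input tokens
BATCH_INPUT_RATE = 0.15  # $ per 1M input tokens via the batch tier

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 cache_hit_rate=0.0, batch=False):
    """Estimate the monthly bill in dollars for a given workload."""
    input_millions = conversations * prompt_tokens / 1e6
    output_millions = conversations * reply_tokens / 1e6
    in_rate = BATCH_INPUT_RATE if batch else INPUT_RATE
    # Cached prompt tokens bill at the cheaper cached rate; the rest
    # bill at the regular (or batch) input rate.
    input_cost = input_millions * (
        cache_hit_rate * CACHED_RATE + (1 - cache_hit_rate) * in_rate
    )
    output_cost = output_millions * OUTPUT_RATE
    return input_cost + output_cost

# The FAQ's example workload: 50,000 conversations a month with
# 1,500-token prompts and 800-token replies.
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # prints "$58.50"
```

At that workload, input costs 75M × $0.30/1M = $22.50 and output costs 40M × $0.90/1M = $36.00, which matches the "roughly $59 per month" figure quoted in the FAQ below.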


Technical specifications

Codestral 2508 at a glance.

Memory

256,000

tokens

Max reply

32,768

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

mistral

Released

Aug 2025

Training cutoff

Mar 2025

Availability

Public pricing

Status

active
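The two token limits above interact: if the model may generate up to its maximum reply length, that budget has to fit inside the context window alongside the prompt. A small sketch, assuming (as is common, though provider-specific) that output tokens count against the same 256K window:

```python
CONTEXT_WINDOW = 256_000  # total tokens the model can attend to
MAX_REPLY = 32_768        # maximum tokens in a single reply

def max_prompt_tokens(reply_budget=MAX_REPLY):
    """Largest prompt (plus history) that still leaves room for the reply."""
    if reply_budget > CONTEXT_WINDOW:
        raise ValueError("reply budget exceeds the context window")
    return CONTEXT_WINDOW - reply_budget

print(max_prompt_tokens())  # prints 223232
```

In other words, reserving the full 32,768-token reply leaves about 223K tokens for the prompt and conversation history.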

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • HumanEval: 81

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick Codestral 2508

  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.
  • Code generation, review, or refactoring.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Codestral 2508 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Codestral 2508 costs roughly $59 per month. Input is $0.30/1M tokens and output is $0.90/1M tokens.
Codestral 2508 has a 256,000-token context window (large memory — an entire book or large codebase). That means you can fit roughly 192,000 words of input and history in a single call.
Beyond text generation, Codestral 2508 supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Codestral 2508 was released in August 2025, with training data cut off around March 2025.
Models in a similar class include Gemini 2.5 Flash, Claude 3 Haiku, and Gemini 3.1 Flash Lite Preview. The "Similar models" section below this FAQ links to each.
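Tool calling and strict JSON output are typically exercised through an OpenAI-compatible chat completions payload. The field names below follow that common convention, but the exact endpoint shape, the model identifier, and the `get_weather` function are all illustrative assumptions; check your provider's documentation for the real contract:

```python
import json

# Hypothetical tool definition in the JSON-schema style used by
# OpenAI-compatible chat completions APIs (an assumption, not this
# host's documented contract).
payload = {
    "model": "mistralai/codestral-2508",  # illustrative model id
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # made-up example function
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

# Serialize for an HTTP POST to the provider's chat completions endpoint.
body = json.dumps(payload)
```

The model is then expected to respond with a structured tool call (function name plus JSON arguments) rather than free text, which is what makes the agentic workflows listed above reliable.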

Still unsure?

Compare Codestral 2508 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.