Mistral: Mistral Nemo

Public pricing · Intelligence 65/100 · Medium memory · Tool use

Mistral: Mistral Nemo is a text model built for general chat, analysis, and production use. It combines steady general-purpose performance, a 131K-token context window, and a low-cost profile to deliver reliable results across general chat, analysis, and production workloads. It is a practical choice when quality, speed, and cost all matter, especially for teams that need consistent outputs, flexible deployment, and room to scale.

Input

$0.02/1M

Output

$0.04/1M

Cached

$0.00/1M

Batch

$0.01/1M
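The rates above make a monthly bill easy to estimate by hand. A minimal sketch in Python, using the listed input and output prices (the helper name is ours, not part of any API):

```python
# Hypothetical helper: estimate a monthly Mistral Nemo bill from the
# per-token rates listed above. Rates are USD per 1M tokens.
RATES = {"input": 0.02, "output": 0.04, "cached": 0.00, "batch": 0.01}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=RATES["input"], output_rate=RATES["output"]):
    """Return the estimated monthly cost in USD."""
    input_total = conversations * prompt_tokens    # tokens sent per month
    output_total = conversations * reply_tokens    # tokens generated per month
    return (input_total * input_rate + output_total * output_rate) / 1_000_000

# Example workload: 50,000 conversations a month, 1,500-token prompts,
# 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 3.1
```

Swap in the batch rate for offline jobs, or the cached rate for repeated prompt prefixes, to see how those discounts change the total.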

Calculate your Mistral Nemo bill.

Set your workload — see cost at your exact volume.


Technical specifications

Mistral Nemo at a glance.

Memory

131,072

tokens

Max reply

16,384

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

mistral

Released

Jul 2024

Training cutoff

Apr 2024

Availability

Public pricing

Status

active
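The two token limits above (131,072-token context window, 16,384-token maximum reply) imply a simple pre-flight check before each call. A minimal sketch, assuming token counts come from your tokenizer of choice:

```python
# Specs from the table above: total context window and max reply length.
CONTEXT_WINDOW = 131_072
MAX_REPLY = 16_384

def fits_in_context(prompt_tokens, history_tokens, reply_budget=MAX_REPLY):
    """True if prompt + history + the reserved reply budget fit the window."""
    return prompt_tokens + history_tokens + reply_budget <= CONTEXT_WINDOW

print(fits_in_context(1_500, 100_000))  # plenty of headroom → True
print(fits_in_context(1_500, 120_000))  # reply budget overflows → False
```

Reserving the full 16,384-token reply budget is conservative; pass a smaller `reply_budget` if you cap responses with `max_tokens`.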

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • ifeval

    63.8
  • bbh

    29.68
  • mmlu_pro

    27.97

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
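The strict-JSON and streaming capabilities above are typically exercised through an OpenAI-compatible chat-completions request. A minimal sketch of a request body; the model id `open-mistral-nemo` and exact field names are assumptions, so check your provider's documentation:

```python
import json

# Assumed OpenAI-compatible chat request body; the model id and
# field names may differ by provider.
payload = {
    "model": "open-mistral-nemo",
    "messages": [
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "Extract the city from: 'Ship to Paris.'"},
    ],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # stream the reply
    "max_tokens": 512,
}

print(json.dumps(payload, indent=2))
```

Tool calling follows the same request shape, with a `tools` list of function schemas added to the body.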

When to pick Mistral Nemo

  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Mistral Nemo — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mistral Nemo costs roughly $3 per month. Input is $0.02/1M tokens and output is $0.04/1M tokens.
Mistral Nemo has a 131,072-token context window (medium memory — a long report or a codebase file). That means you can fit roughly 98,000 words of input and history in a single call, at a typical 0.75 words per token.
Beyond text generation, Mistral Nemo supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Mistral Nemo was released in July 2024, with training data cut off around April 2024.
Models in a similar class include Mistral Small 3, Mistral Small 3.2 24B, and Devstral Small 1.1. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Mistral Nemo against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.