Mistral: Mistral 7B Instruct v0.1

Public pricing · Intelligence 74/100 · Small memory · Tools

Mistral: Mistral 7B Instruct v0.1 is a text model built for general conversation, analysis, and production use. It combines steady general-purpose performance, a roughly 3K-token context window, and a low-cost profile for reliable work on general chat, analysis, and production workloads.

Input

$0.11/1M

Output

$0.19/1M

Cached

$0.05/1M

Batch

$0.10/1M

Calculate your Mistral 7B Instruct v0.1 bill.

Set your workload — see cost at your exact volume.



Technical specifications

Mistral 7B Instruct v0.1 at a glance.

Memory

2,824

tokens

Max reply

4,096

tokens

Memory tier

Small

a few emails or a short document

Tokenizer

mistral

Released

Sep 2023

Training cutoff

Jul 2023

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • mmlu

    60.1
  • ifeval

    44.87
  • bbh

    7.65
  • mmlu_pro

    15.72

What it can do

Capabilities & limits.

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
  • Does not understand images
  • No deep step-by-step reasoning mode
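Strict JSON output is only useful if the caller actually enforces it. A minimal sketch of validating a reply on the consumer side — the `reply` string here is a hypothetical stand-in, not real API output:

```python
import json

def parse_strict_json(reply: str) -> dict:
    """Parse a model reply that is expected to be strict JSON.

    Raises ValueError (or json.JSONDecodeError) if the reply is not a
    single JSON object, so malformed outputs fail loudly.
    """
    data = json.loads(reply)  # raises json.JSONDecodeError on bad JSON
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object at the top level")
    return data

# Hypothetical reply from a JSON-mode call:
reply = '{"sentiment": "positive", "confidence": 0.92}'
parsed = parse_strict_json(reply)
print(parsed["sentiment"])  # positive
```

Failing fast on a parse error is usually better than letting a half-formed string propagate into downstream code.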

When to pick Mistral 7B Instruct v0.1

  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.
  • Your inputs routinely exceed short documents.

FAQ

Mistral 7B Instruct v0.1 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Mistral 7B Instruct v0.1 costs roughly $16 per month. Input is $0.11/1M tokens and output is $0.19/1M tokens.
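The $16 figure follows directly from the listed rates; a quick sketch of the arithmetic, using the workload numbers quoted above:

```python
# Monthly cost at the quoted workload and public per-token rates.
INPUT_RATE = 0.11 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.19 / 1_000_000  # $ per output token

conversations = 50_000
prompt_tokens = 1_500
reply_tokens = 800

input_cost = conversations * prompt_tokens * INPUT_RATE    # $8.25
output_cost = conversations * reply_tokens * OUTPUT_RATE   # $7.60
total = input_cost + output_cost
print(f"${total:.2f}/month")  # $15.85/month — roughly $16
```

Swapping in the cached ($0.05/1M) or batch ($0.10/1M) input rate in place of `INPUT_RATE` shows where the savings come from on repeat-heavy or offline workloads.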
Mistral 7B Instruct v0.1 has a 2,824-token context window (small memory — a few emails or a short document). At the common rule of thumb of ~0.75 words per token, that means you can fit about 2,100 words of input and history in a single call.
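With a window this small, it pays to check whether a prompt fits before making the call. A rough sketch using the common ~4-characters-per-token heuristic — the heuristic and the helper names are assumptions for illustration, not the actual mistral tokenizer:

```python
CONTEXT_WINDOW = 2_824  # tokens, per the spec above

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    The real count comes from the mistral tokenizer; this is a guard rail."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_reply: int = 512) -> bool:
    """True if the prompt plus a reserved reply budget fits the window."""
    return estimate_tokens(prompt) + reserved_reply <= CONTEXT_WINDOW

short_doc = "word " * 500  # ~500 words, ~2,500 characters
print(fits_in_context(short_doc))  # True
```

Reserving a reply budget up front avoids the failure mode where the prompt fits but the model has no room left to answer.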
Beyond text generation, Mistral 7B Instruct v0.1 supports function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Mistral 7B Instruct v0.1 was released in September 2023, with training data cut off around July 2023.
Models in a similar class include Devstral Small 1.1, Ministral 3 3B 2512, and Mistral Small Creative. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Mistral 7B Instruct v0.1 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.