MiniMax: MiniMax M2

Public pricing · Intelligence 78/100 · Medium memory · Deep reasoning · Tools

MiniMax: MiniMax M2 is a text model built for coding, software engineering, and agentic workflows. It combines strong coding performance, reliable tool and agent use, a 197K-token context window, and a low-cost profile.

Input

$0.26/1M

Output

$1.00/1M

Cached

$0.03/1M

Batch

$0.15/1M

Calculate your MiniMax M2 bill.

What would MiniMax M2 cost you? Set your workload — anywhere from 1,000 to 10M conversations a month — and see the cost at your exact volume.
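The calculator's estimate can be reproduced by hand from the list rates above. A minimal sketch in Python, ignoring cached-input and batch discounts; the workload parameters match the example used in the FAQ below (50,000 conversations, 1,500-token prompts, 800-token replies):

```python
# Estimate a monthly MiniMax M2 bill from the public list rates.
# Cached-input and batch discounts are ignored for simplicity.

INPUT_RATE = 0.26 / 1_000_000   # $ per input token
OUTPUT_RATE = 1.00 / 1_000_000  # $ per output token

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Total dollars for one month of traffic."""
    input_cost = conversations * prompt_tokens * INPUT_RATE
    output_cost = conversations * reply_tokens * OUTPUT_RATE
    return input_cost + output_cost

# 50,000 conversations/month, 1,500-token prompts, 800-token replies
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # → $59.50
```

Input dominates volume here (75M tokens vs 40M), but output's higher per-token rate means it still accounts for two thirds of the bill.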

Technical specifications

MiniMax M2 at a glance.

Memory

196,608

tokens

Max reply

196,608

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

Released

Oct 2025

Training cutoff

Jun 2025

Availability

Public pricing

Status

Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • AIME 2025: 78
  • GPQA Diamond: 78
  • IFEval: 72
  • LiveCodeBench: 83
  • MMLU-Pro: 82
  • SWE-bench Verified: 69.4

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
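To make the tool-calling and strict-JSON capabilities concrete, here is a sketch of a request body in the widely used chat-completions shape. The model identifier, field names, and the `get_weather` tool schema are illustrative assumptions, not taken from MiniMax's documentation — check the provider's API reference before relying on them:

```python
import json

# Hypothetical request body in the common chat-completions shape.
# Model id, field names, and tool schema are assumptions for illustration.
request = {
    "model": "MiniMax-M2",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Paris?"}
    ],
    "tools": [{  # a function the model may choose to call
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # streamed replies
}

print(json.dumps(request, indent=2))
```

The three capability flags above — tool use, strict JSON, and streaming — each map to one field in a payload like this.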

When to pick MiniMax M2

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

MiniMax M2 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, MiniMax M2 costs roughly $59 per month. Input is $0.26/1M tokens and output is $1.00/1M tokens.
MiniMax M2 has a 196,608-token context window (medium memory — a long report or a codebase file). That means you can fit roughly 147,000 words of input and history in a single call.
Beyond text generation, MiniMax M2 supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
MiniMax M2 was released in October 2025, with training data cut off around June 2025.
Models in a similar class include MiniMax M2.1, MiniMax M2-her, and MiniMax M2.7. The "Similar models" section below this FAQ links to each.
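A common rule of thumb for English text is about 0.75 words per token, so under that assumption the full 196,608-token window holds roughly 147,000 words; the exact ratio depends on the tokenizer and the language of the text:

```python
CONTEXT_WINDOW = 196_608   # MiniMax M2 context window, in tokens
WORDS_PER_TOKEN = 0.75     # rough rule of thumb for English text

# Approximate English words that fit in one call (prompt + history).
approx_words = int(CONTEXT_WINDOW * WORDS_PER_TOKEN)
print(approx_words)  # → 147456
```

Code, non-English text, and whitespace-heavy input tokenize less efficiently, so treat this as an upper-bound estimate rather than a guarantee.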

Still unsure?

Compare MiniMax M2 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.