MiniMax: MiniMax M1

Public pricing · Intelligence 76/100 · Huge memory · Deep reasoning · Tool use

MiniMax: MiniMax M1 is a text model for reasoning and problem solving. It combines deep reasoning and planning with low latency and efficient inference, a context window of 1M+ tokens, and balanced pricing for reliable work in reasoning, analysis, and hard problem solving.

  • Input: $0.40/1M
  • Output: $2.20/1M
  • Cached: $0.04/1M
  • Batch: $0.20/1M
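Where prompt prefixes repeat across calls (a shared system prompt, a long fixed context), the cached-input rate changes the bill substantially. A minimal sketch of the blended input cost, assuming some fraction of input tokens is served from cache (function and variable names are illustrative, not part of any MiniMax API):

```python
INPUT_RATE = 0.40    # USD per 1M fresh input tokens (published rate)
CACHED_RATE = 0.04   # USD per 1M cached input tokens (published rate)

def input_cost(million_tokens, cached_fraction=0.0):
    """Blend fresh and cached input rates for a given cache-hit fraction."""
    fresh = million_tokens * (1 - cached_fraction) * INPUT_RATE
    cached = million_tokens * cached_fraction * CACHED_RATE
    return fresh + cached

print(input_cost(75, cached_fraction=0.0))  # → 30.0 (no caching)
print(input_cost(75, cached_fraction=0.8))  # 80% cache hits cut input cost by ~72%
```

At the rates above, every cached token costs a tenth of a fresh one, so high cache-hit workloads pay far less for the same input volume.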

Calculate your MiniMax M1 bill.

Set your workload to see cost at your exact volume.

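The math behind the calculator is straightforward per-token billing. A minimal sketch in Python, using the published rates (the function name and workload shape are assumptions, not the site's actual code):

```python
# Published MiniMax M1 rates, USD per 1M tokens.
INPUT_RATE = 0.40
OUTPUT_RATE = 2.20

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Estimate the monthly bill for a given workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_RATE + output_millions * OUTPUT_RATE

# Example workload: 50,000 conversations a month,
# 1,500-token prompts and 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # → 118.0
```

This reproduces the roughly $118/month figure quoted in the FAQ below: $30 of input (75M tokens) plus $88 of output (40M tokens).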

Technical specifications

MiniMax M1 at a glance.

  • Memory: 1,000,000 tokens
  • Max reply: 40,000 tokens
  • Memory tier: Huge (multiple books or whole repositories)
  • Tokenizer: —
  • Released: Jun 2025
  • Training cutoff: Jun 2024
  • Availability: Public pricing
  • Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • SWE-bench Verified: 56
  • LiveCodeBench: 65

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick MiniMax M1

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images; pick a vision-capable model instead.

FAQ

MiniMax M1: the questions we see most.

Pricing, capabilities, and alternatives, generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, MiniMax M1 costs roughly $118 per month. Input is $0.40/1M tokens and output is $2.20/1M tokens.
MiniMax M1 has a 1,000,000-token context window (huge memory: multiple books or whole repositories). That means roughly 750,000 words of input and history can fit in a single call, at the common estimate of about 0.75 words per English token.
Beyond text generation, MiniMax M1 supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
MiniMax M1 was released in June 2025, with training data cut off around June 2024.
Models in a similar class include MiMo-V2-Omni, Qwen3.5 397B A17B, GLM 4.6. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare MiniMax M1 against 100+ other models.

Open the full wizard: pick a use case, set your usage, and see side-by-side monthly costs in under a minute.