MiniMax: MiniMax M1

Public pricing · Intelligence 76/100 · Huge memory · Deep thinking · Tool use

MiniMax: MiniMax M1 is a text model built for reasoning and problem solving. It combines deep reasoning and planning, low latency with efficient inference, a 1M+ token context window, and a balanced cost profile, making it dependable for reasoning, analysis, and hard problem solving. It is a practical choice when latency, cost, and throughput matter.

Input

$0.40/1M

Output

$2.20/1M

Cached

$0.04/1M

Batch

$0.20/1M
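The listed per-million-token prices translate into a monthly bill with straightforward arithmetic. A minimal sketch (the function and parameter names are illustrative, not part of any official SDK):

```python
# Per-million-token prices from the table above (USD).
PRICE_PER_M = {
    "input": 0.40,   # $/1M fresh input tokens
    "output": 2.20,  # $/1M output tokens
    "cached": 0.04,  # $/1M cached input tokens
}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 cached_fraction=0.0):
    """Rough monthly bill in dollars for a simple chat workload."""
    input_m = conversations * prompt_tokens / 1_000_000   # millions of input tokens
    output_m = conversations * reply_tokens / 1_000_000   # millions of output tokens
    cached_m = input_m * cached_fraction                  # portion served from cache
    fresh_m = input_m - cached_m
    return (fresh_m * PRICE_PER_M["input"]
            + cached_m * PRICE_PER_M["cached"]
            + output_m * PRICE_PER_M["output"])

# The FAQ's reference workload: 50,000 conversations/month,
# 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 118.0
```

With full prompt caching (`cached_fraction=1.0`) the same workload drops to roughly $91/month, since cached input is billed at a tenth of the fresh-input rate.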

Calculate your MiniMax M1 bill.

Set your workload — see cost at your exact volume.


1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Technical specifications

MiniMax M1 at a glance.

Memory

1,000,000

tokens

Max reply

40,000

tokens

Memory tier

Huge

multiple books or whole repositories

Tokenizer

Released

Jun 2025

Training cutoff

Jun 2024

Availability

Public pricing

Status

Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • SWE-bench Verified: 56
  • LiveCodeBench: 65

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
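Tool calling, strict JSON output, and streaming are typically switched on per request. A minimal sketch of what such a request body might look like, assuming an OpenAI-compatible chat-completions schema — the field names below follow that convention and are an assumption, not MiniMax's documented API, and `list_commits` is a hypothetical tool:

```python
# Assumption: an OpenAI-compatible chat-completions schema. Field names
# ("tools", "response_format", "stream") follow that convention and may
# differ in MiniMax's actual API.
request_body = {
    "model": "MiniMax-M1",
    "messages": [
        {"role": "user", "content": "Summarize repo activity for June."}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "list_commits",  # hypothetical tool for illustration
            "description": "List commits in a date range.",
            "parameters": {
                "type": "object",
                "properties": {
                    "since": {"type": "string"},
                    "until": {"type": "string"},
                },
                "required": ["since", "until"],
            },
        },
    }],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # stream the reply
}
```

The model decides whether to answer directly or emit a `list_commits` call; your code executes the tool and sends the result back in a follow-up message.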

When to pick MiniMax M1

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

MiniMax M1 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, MiniMax M1 costs roughly $118 per month. Input is $0.40/1M tokens and output is $2.20/1M tokens.
MiniMax M1 has a 1,000,000-token context window (huge memory — multiple books or whole repositories). That means you can fit roughly 750,000 English words of input and history in a single call.
Beyond text generation, MiniMax M1 supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
MiniMax M1 was released in June 2025, with training data cut off around June 2024.
Models in a similar class include MiMo-V2-Omni, Qwen3.5 397B A17B, and GLM 4.6; the "Similar models" section below this FAQ links to each.

Still unsure?

Compare MiniMax M1 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.