Xiaomi: MiMo-V2-Flash

Public pricing · Intelligence 81/100 · Large memory · Deep reasoning · Tool use

Xiaomi: MiMo-V2-Flash is a text model for general chat, analysis, and production use. It combines low latency, efficient inference, and open-source flexibility with a 262K-token context window and a low-cost pricing profile, making it a dependable choice for general chat, analysis, and production workloads.

Input

$0.09/1M

Output

$0.29/1M

Cached

$0.04/1M

Batch

$0.04/1M

Calculate your MiMo-V2-Flash bill.

Set your workload to see cost at your exact volume.

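The calculator boils down to simple per-token arithmetic. A minimal sketch of the same estimate, using the input and output rates from the pricing table above (the workload numbers are example inputs, not defaults of any real tool):

```python
# Estimate a monthly MiMo-V2-Flash bill from the listed per-million-token rates.
INPUT_RATE = 0.09 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.29 / 1_000_000  # $ per output token

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Estimated monthly bill in dollars for a simple chat workload."""
    input_cost = conversations * prompt_tokens * INPUT_RATE
    output_cost = conversations * reply_tokens * OUTPUT_RATE
    return input_cost + output_cost

# Example workload: 50,000 conversations, 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # prints 18.35
```

At that volume, output tokens dominate the bill ($11.60 of the $18.35) even though replies are shorter than prompts, because the output rate is more than three times the input rate.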

Technical specifications

MiMo-V2-Flash at a glance.

Memory

262,144

tokens

Max reply

65,536

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

Not specified

Released

Mar 2026

Training cutoff

Dec 2025

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aime_2025

    94.1
  • bbh

    88.5
  • drop

    84.7
  • gpqa_diamond

    83.7
  • hellaswag

    88.5
  • humaneval

    70.7
  • livecodebench

    80.6
  • math

    71
  • mmlu

    86.7
  • mmlu_pro

    84.9
  • swe_bench_verified

    73.4

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
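Tool calling and strict JSON output are typically exercised through an OpenAI-compatible chat-completions request. The exact endpoint, model identifier, and field names depend on the provider hosting MiMo-V2-Flash, so treat this as an illustrative sketch: the `"mimo-v2-flash"` model string and the `get_weather` tool are assumptions, not confirmed values.

```python
import json

# Hypothetical OpenAI-style request body exercising tool calling, strict JSON
# output, and streaming. Field names follow the common chat-completions
# convention and are NOT confirmed for any specific MiMo-V2-Flash provider.
payload = {
    "model": "mimo-v2-flash",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Beijing?"},
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "response_format": {"type": "json_object"},  # strict JSON mode
    "stream": True,  # the model streams replies
}

print(json.dumps(payload, indent=2))
```

The `response_format` and `tools` fields are independent: you can request strict JSON without declaring tools, or declare tools and let the model decide when to call them.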

When to pick MiMo-V2-Flash

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images: pick a vision-capable model instead.

FAQ

MiMo-V2-Flash: the questions we see most.

Pricing, capabilities, alternatives: generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, MiMo-V2-Flash costs roughly $18 per month. Input is $0.09/1M tokens and output is $0.29/1M tokens.
MiMo-V2-Flash has a 262,144-token context window (large memory: an entire book or large codebase). That means you can fit about 196,000 words of input and history in a single call, assuming roughly 0.75 words per token.
Beyond text generation, MiMo-V2-Flash supports deep step-by-step reasoning, function/tool calling, and strict JSON output. It streams replies by default.
MiMo-V2-Flash was released in March 2026, with training data cut off around December 2025.
Models in a similar class include MiMo-V2-Omni, MiMo-V2-Pro, and Nemotron 3 Super. The "Similar models" section below this FAQ links to each.
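Where the answers above use the standard input rate, the cached-input rate from the pricing table ($0.04/1M versus $0.09/1M) can cut prompt costs further for workloads that reuse long prefixes. A rough sketch of the comparison; the 80% cache-hit rate is an illustrative assumption, not a measured figure:

```python
# Compare standard vs. partially cached input cost for a prompt-heavy workload.
INPUT_RATE = 0.09   # $ per 1M fresh input tokens
CACHED_RATE = 0.04  # $ per 1M cached input tokens

def input_cost(million_tokens, cache_hit_rate=0.0):
    """Blend the fresh and cached rates by the fraction of tokens served from cache."""
    cached = million_tokens * cache_hit_rate
    fresh = million_tokens - cached
    return fresh * INPUT_RATE + cached * CACHED_RATE

# 75M input tokens/month (the example workload's prompt volume):
print(round(input_cost(75), 2))        # prints 6.75 (no caching)
print(round(input_cost(75, 0.8), 2))   # prints 3.75 (80% of prompt tokens cached)
```

Batch pricing sits at the same $0.04/1M rate, so latency-tolerant jobs can reach similar savings without relying on cache hits.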

Still unsure?

Compare MiMo-V2-Flash against 100+ other models.

Open the full wizard: pick a use case, set your usage, and see side-by-side monthly costs in under a minute.