Google: Gemini 2.0 Flash

Public pricing · Intelligence 71/100 · Huge memory · Vision · Deep reasoning · Tool use

Google: Gemini 2.0 Flash is a multimodal model built for vision-language and audio understanding. It combines multimodal input handling, image understanding, a 1M+ token context window, and a low-cost profile for reliable work across audio, image, and text inputs.

Input: $0.10 / 1M tokens

Output: $0.40 / 1M tokens

Cached input: $0.03 / 1M tokens

Batch: $0.07 / 1M tokens

Calculate your Gemini 2.0 Flash bill.

Set your workload — see cost at your exact volume.
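The calculator's arithmetic is straightforward. A minimal sketch, using the rates from the pricing table above (the function and variable names here are illustrative, not part of any SDK):

```python
# Estimate a monthly Gemini 2.0 Flash bill from a workload.
# Rates come from the pricing table above (USD per 1M tokens).
INPUT_RATE = 0.10
OUTPUT_RATE = 0.40

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Return the estimated monthly cost in USD for a given workload."""
    input_cost = conversations * prompt_tokens / 1_000_000 * INPUT_RATE
    output_cost = conversations * reply_tokens / 1_000_000 * OUTPUT_RATE
    return input_cost + output_cost

# Example workload: 50,000 conversations/month, 1,500-token prompts,
# 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # 23.5
```

Cached and batch rates would lower this further for repeated prompts or offline jobs; the sketch covers only standard input/output pricing.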


Technical specifications

Gemini 2.0 Flash at a glance.

Memory: 1,000,000 tokens

Max reply: 8,192 tokens

Memory tier: Huge (multiple books or whole repositories)

Tokenizer: SentencePiece

Released: Feb 2025

Training cutoff: Jun 2024

Availability: Public pricing

Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • AA Intelligence Index: 19
  • Chatbot Arena Elo: 1356
  • GPQA Diamond: 62
  • Humanity's Last Exam: 6.56
  • MMLU-Pro: 77
  • SWE-bench Verified: 51

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
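Two of these capabilities — strict JSON output and tool calling — are requested through the request body sent to the `generateContent` endpoint. A minimal sketch of such a body, built as a plain dict; the field names follow Google's public REST API for Gemini, but verify them against the official docs before relying on this:

```python
import json

def build_request(prompt: str) -> dict:
    """Build a generateContent request body asking for strict JSON output.

    Field names ("contents", "parts", "generationConfig",
    "responseMimeType") follow the Gemini REST API's documented shape.
    """
    return {
        "contents": [
            {"role": "user", "parts": [{"text": prompt}]}
        ],
        "generationConfig": {
            # Forces the model to reply with syntactically valid JSON.
            "responseMimeType": "application/json",
        },
    }

body = build_request("List three uses for a 1M-token context window.")
print(json.dumps(body, indent=2))
```

In practice this body is POSTed to the `models/gemini-2.0-flash:generateContent` endpoint with an API key; tool declarations would go in a sibling `tools` field.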

When to pick Gemini 2.0 Flash

  • Multi-step reasoning, research agents, or hard math.
  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Gemini 2.0 Flash — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Gemini 2.0 Flash costs roughly $24 per month. Input is $0.10/1M tokens and output is $0.40/1M tokens.
Gemini 2.0 Flash has a 1,000,000-token context window (huge memory — multiple books or whole repositories). At the common rule of thumb of roughly 0.75 words per token, that means you can fit about 750,000 words of input and history in a single call.
Beyond text generation, Gemini 2.0 Flash supports image understanding, deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Gemini 2.0 Flash was released in February 2025, with training data cut off around June 2024.
Models in a similar class include Gemini 2.5 Flash Lite, Gemini 2.5 Flash Lite Preview 09-2025, and Gemini 2.0 Flash Lite. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare Gemini 2.0 Flash against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.