Google: Gemma 3 12B

Public pricing · Intelligence 80/100 · Medium memory · Vision · Tools

Google: Gemma 3 12B is a multimodal model built for multimodal reasoning and analysis. It combines multimodal input handling, strong reasoning and planning, a 131K-token context window, and a low-cost profile for reliable work on multimodal reasoning and analysis tasks.

Pricing

  • Input: $0.04 / 1M tokens
  • Output: $0.13 / 1M tokens
  • Cached input: $0.01 / 1M tokens
  • Batch: $0.02 / 1M tokens
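The per-token rates above translate directly into a monthly estimate. A minimal sketch in Python, using the list prices from the table; the workload numbers in the example are placeholders, not recommendations:

```python
# Gemma 3 12B list prices, USD per 1M tokens (from the pricing table above)
PRICE_PER_M = {"input": 0.04, "output": 0.13, "cached": 0.01, "batch": 0.02}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=PRICE_PER_M["input"],
                 output_rate=PRICE_PER_M["output"]):
    """Estimate the monthly bill in USD for a simple chat workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * input_rate + output_millions * output_rate

# Example: 50,000 conversations with 1,500-token prompts and 800-token replies
print(round(monthly_cost(50_000, 1_500, 800), 2))  # 8.2
```

Swapping in the cached or batch rate shows how much discounted traffic changes the bill at the same volume.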


Technical specifications

Gemma 3 12B at a glance.

  • Memory: 131,072 tokens
  • Max reply: 8,192 tokens
  • Memory tier: Medium (a long report or a codebase file)
  • Tokenizer: SentencePiece
  • Released: March 2025
  • Training cutoff: August 2024
  • Availability: Public pricing
  • Status: Active

Benchmarks


Independent evaluations from public leaderboards. Higher is better.

  • BBH: 85.7
  • HumanEval: 85.4
  • IFEval: 88.9

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
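Several of these capabilities (tool calling, strict JSON output, streaming) are typically exposed through the serving layer's chat API. A hedged sketch that only builds a request body, assuming an OpenAI-compatible endpoint (e.g. vLLM or a hosted gateway); the model id and the `get_weather` tool are placeholders, and exact field support varies by provider:

```python
# Builds (but does not send) a chat request exercising streaming,
# strict JSON output, and function calling.
# Assumes an OpenAI-compatible serving layer; field support varies by provider.
def build_request(prompt: str) -> dict:
    return {
        "model": "google/gemma-3-12b-it",  # placeholder id; check your provider
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,                            # stream replies token by token
        "response_format": {"type": "json_object"},  # strict JSON output
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",   # hypothetical tool, for illustration only
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
    }

req = build_request("What's the weather in Paris? Reply as JSON.")
```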

When to pick Gemma 3 12B

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.
  • Multimodal pipelines mixing text + images.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Gemma 3 12B — the questions we see most.

Pricing, capabilities, alternatives: answers generated from the same pricing and spec data shown above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Gemma 3 12B costs roughly $8 per month. Input is $0.04/1M tokens and output is $0.13/1M tokens.
Gemma 3 12B has a 131,072-token context window (medium memory: a long report or a codebase file). At a rough 0.75 words per token, that fits about 98,000 words of input and history in a single call.
Beyond text generation, Gemma 3 12B supports image understanding, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Gemma 3 12B was released in March 2025, with a training data cutoff around August 2024.
Models in a similar class include Gemma 3 4B, Gemma 3n 4B, and Gemini 2.0 Flash Lite; the "Similar models" section below this FAQ links to each.
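The 131,072-token window and 8,192-token reply cap from the spec table imply a simple budget check before each call. A sketch, assuming a rough 0.75 words per token; the real ratio depends on the tokenizer and the text:

```python
CONTEXT_WINDOW = 131_072   # total tokens per call (spec table above)
MAX_REPLY = 8_192          # maximum reply length in tokens (spec table above)
WORDS_PER_TOKEN = 0.75     # rough rule of thumb, not a tokenizer guarantee

def prompt_budget_tokens(reserved_reply=MAX_REPLY):
    """Tokens left for prompt + history after reserving the full reply."""
    return CONTEXT_WINDOW - reserved_reply

def approx_words(tokens):
    """Very rough word-count equivalent for a token budget."""
    return int(tokens * WORDS_PER_TOKEN)

print(prompt_budget_tokens())        # 122880
print(approx_words(CONTEXT_WINDOW))  # 98304
```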

Still unsure?

Compare Gemma 3 12B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.