Google: Gemma 4 31B

Public pricing · Intelligence 80/100 · Large memory · Vision · Tool use

Google: Gemma 4 31B is a multimodal model built for reasoning over and analyzing mixed inputs. It combines multimodal input processing with deep reasoning and planning, a 262K-token context window, and a cost-efficient pricing profile for reliable multimodal work.

Input

$0.13/1M

Output

$0.38/1M

Cached

$0.02/1M

Batch

$0.07/1M

Calculate your Gemma 4 31B bill.

Set your workload — see cost at your exact volume.

What would Gemma 4 31B cost you?

Adjust the workload to see your monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Technical specifications

Gemma 4 31B at a glance.

Memory

262,144

tokens

Max reply

32,768

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

sentencepiece

Released

Apr 2026

Training cutoff

Oct 2025

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • Artificial Analysis Intelligence Index: 39
  • Chatbot Arena Elo: 1452
  • LiveCodeBench: 80
  • MMLU-Pro: 85.2

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick Gemma 4 31B

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Gemma 4 31B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Gemma 4 31B costs roughly $25 per month. Input is $0.13/1M tokens and output is $0.38/1M tokens.
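The pricing arithmetic above can be sketched in a few lines. This is an illustrative estimator, not an official tool; the function name and workload defaults are made up, and the rates are simply the listed per-million-token prices:

```python
# Rough monthly-cost estimator for Gemma 4 31B at the listed rates.
INPUT_PER_M = 0.13    # USD per 1M input tokens
OUTPUT_PER_M = 0.38   # USD per 1M output tokens

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Estimate the monthly bill for a given workload."""
    input_cost = conversations * prompt_tokens * INPUT_PER_M / 1_000_000
    output_cost = conversations * reply_tokens * OUTPUT_PER_M / 1_000_000
    return input_cost + output_cost

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 24.95
```

At that workload, input costs 75M tokens × $0.13/1M = $9.75 and output costs 40M tokens × $0.38/1M = $15.20, which is where the "roughly $25 per month" figure comes from.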
Gemma 4 31B has a 262,144-token context window (large memory — an entire book or large codebase). At the common heuristic of roughly 0.75 words per token, that means you can fit about 196,000 words of input and history in a single call.
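A quick way to sanity-check whether a prompt fits is to convert words to tokens with the same heuristic and reserve room for the reply. This is a sketch under the ~0.75-words-per-token assumption, not a tokenizer guarantee; the function name is illustrative:

```python
CONTEXT_TOKENS = 262_144   # Gemma 4 31B context window
MAX_REPLY_TOKENS = 32_768  # listed maximum reply length
WORDS_PER_TOKEN = 0.75     # rough English heuristic, not exact

def fits(prompt_words, reply_tokens=MAX_REPLY_TOKENS):
    """Rough check: does a prompt this long leave room for the reply?"""
    prompt_tokens = prompt_words / WORDS_PER_TOKEN
    return prompt_tokens + reply_tokens <= CONTEXT_TOKENS

# The whole window, expressed in words under the same heuristic.
print(int(CONTEXT_TOKENS * WORDS_PER_TOKEN))  # → 196608
```

In practice, actual token counts depend on the SentencePiece tokenizer and the language of the text, so treat the word estimate as a ballpark.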
Beyond text generation, Gemma 4 31B supports image understanding, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Gemma 4 31B was released in April 2026, with training data cut off around October 2025.
Models in a similar class include Gemini 2.0 Flash, Gemini 2.5 Flash Lite, and Gemini 2.5 Flash Lite Preview 09-2025. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare Gemma 4 31B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.