Z.ai: GLM 5V Turbo

Public pricing · Intelligence 79/100 · Large memory · Vision · Deep reasoning · Tool use

Z.ai: GLM 5V Turbo is a multimodal model for coding, software engineering, and agent workflows. It combines multimodal input processing, strong coding performance, a 203K-token context window, and balanced pricing for reliable work across those tasks.

Input: $1.20/1M tokens

Output: $4.00/1M tokens

Cached: $0.24/1M tokens

Batch: $0.60/1M tokens
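The prices above turn into a monthly bill as a straightforward per-token calculation. A minimal sketch in Python; the workload numbers and the cache-discount handling are illustrative assumptions, not part of Z.ai's published data:

```python
# Published GLM 5V Turbo prices, in USD per 1M tokens.
PRICE_PER_M = {"input": 1.20, "output": 4.00, "cached": 0.24, "batch": 0.60}

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 cached_fraction=0.0):
    """Estimate a monthly bill for a given workload.

    cached_fraction: share of prompt tokens assumed to be served from
    the prompt cache at the discounted $0.24/1M rate (an assumption;
    actual cache behavior depends on the provider).
    """
    input_m = conversations * prompt_tokens / 1_000_000
    output_m = conversations * reply_tokens / 1_000_000
    cost = (input_m * (1 - cached_fraction) * PRICE_PER_M["input"]
            + input_m * cached_fraction * PRICE_PER_M["cached"]
            + output_m * PRICE_PER_M["output"])
    return round(cost, 2)

# Example workload: 50,000 conversations/month, 1,500-token prompts,
# 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # 250.0
```

With full cache hits on the prompt side, the same workload drops to $178/month, which is why the cached rate matters for repetitive agent loops.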

Calculate your GLM 5V Turbo bill.

Set your workload to see cost at your exact volume.


Technical specifications

GLM 5V Turbo at a glance.

Memory: 202,752 tokens

Max reply: 131,072 tokens

Memory tier: Large (an entire book or large codebase)

Tokenizer: —

Released: Apr 2026

Training cutoff: Dec 2025

Availability: Public pricing

Status: Active
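The context and max-reply figures interact: on most chat APIs, generated tokens count against the same window, so the room left for prompt and history shrinks by whatever reply budget you reserve. A small sketch under that assumption:

```python
CONTEXT_WINDOW = 202_752   # total tokens per call
MAX_REPLY = 131_072        # upper bound on generated tokens

def max_prompt_tokens(reserved_reply=MAX_REPLY):
    """Tokens left for prompt + history, assuming the reply shares
    the context window (true for most chat APIs, but confirm in
    Z.ai's docs)."""
    if reserved_reply > MAX_REPLY:
        raise ValueError("reply budget exceeds the model's max reply")
    return CONTEXT_WINDOW - reserved_reply

print(max_prompt_tokens())        # 71,680 with the full reply budget
print(max_prompt_tokens(4_096))   # 198,656 with a short-reply budget
```

Reserving a smaller reply budget is the usual way to keep most of the large window available for documents and chat history.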

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • AA Intelligence Index: 43

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
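Tool use and strict JSON output usually work together: you pass the model a function schema, and it replies with a JSON object of arguments that parses cleanly. A sketch of an OpenAI-style tool definition; the envelope format and the `get_weather` tool are assumptions for illustration, not Z.ai's documented API shape:

```python
import json

# Hypothetical tool the model may call. The OpenAI-style envelope
# below is an assumption; check Z.ai's API docs for the exact shape.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# With strict JSON output, the model's tool-call arguments are
# guaranteed to be valid JSON, so parsing never needs a fallback:
raw_arguments = '{"city": "Amsterdam", "unit": "celsius"}'  # example reply
args = json.loads(raw_arguments)
print(args["city"])  # Amsterdam
```

The practical benefit of strict JSON mode is exactly this: `json.loads` on the model's output cannot fail, which removes a whole class of retry logic from agent code.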

When to pick GLM 5V Turbo

  • Multi-step reasoning, research agents, or hard math.
  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

GLM 5V Turbo: the questions we see most.

Pricing, capabilities, alternatives: generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GLM 5V Turbo costs roughly $250 per month. Input is $1.20/1M tokens and output is $4.00/1M tokens.
GLM 5V Turbo has a 202,752-token context window (large memory: an entire book or large codebase). Using the common heuristic of roughly 0.75 words per token, that is about 150,000 words of input and history in a single call.
Beyond text generation, GLM 5V Turbo supports image understanding, deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
GLM 5V Turbo was released in April 2026, with training data cut off around December 2025.
Models in a similar class include Gemini 2.5 Pro, Gemini 2.5 Pro Preview 05-06, Gemini 2.5 Pro Preview 06-05. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare GLM 5V Turbo against 100+ other models.

Open the full wizard: pick a use case, set your usage, and see side-by-side monthly costs in under a minute.