Z.ai: GLM 5V Turbo

Public pricing · Intelligence 79/100 · Large memory · Vision · Deep reasoning · Tool use

Z.ai: GLM 5V Turbo is a multimodal model built for coding, software engineering, and agentic workflows. It combines multimodal input handling, strong coding performance, a 203K-token context window, and a balanced cost profile to deliver reliable work across those tasks.

Input

$1.20/1M

Output

$4.00/1M

Cached

$0.24/1M

Batch

$0.60/1M
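The per-token rates above translate directly into a monthly estimate. A minimal sketch in Python using the listed prices (the helper function and its defaults are illustrative, not an official SDK; whether the batch rate covers output as well as input should be checked against Z.ai's docs):

```python
# Per-million-token rates for GLM 5V Turbo, from the pricing table above.
RATES = {
    "input": 1.20,    # $/1M input tokens
    "output": 4.00,   # $/1M output tokens
    "cached": 0.24,   # $/1M cached input tokens
    "batch": 0.60,    # $/1M tokens under batch processing (exact scope: see Z.ai docs)
}

def monthly_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate a monthly bill in dollars from total token counts."""
    billable_input = input_tokens - cached_tokens
    cost = (
        billable_input / 1e6 * RATES["input"]
        + cached_tokens / 1e6 * RATES["cached"]
        + output_tokens / 1e6 * RATES["output"]
    )
    return round(cost, 2)

# Example: 10,000 conversations with 1,500-token prompts and 800-token replies.
print(monthly_cost(10_000 * 1_500, 10_000 * 800))  # → 50.0
```

Caching matters at scale: moving repeated prompt prefixes to the $0.24/1M cached rate cuts that part of the input bill by 80%.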

Calculate your GLM 5V Turbo bill.

Set your workload — see cost at your exact volume.


Technical specifications

GLM 5V Turbo at a glance.

Memory

202,752

tokens

Max reply

131,072

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

Released

Apr 2026

Training cutoff

Dec 2025

Availability

Public pricing

Status

Active
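The context and reply limits above imply a simple budget check: prompt plus history must leave room for the reply inside the 202,752-token window. A rough sketch using the common ~4-characters-per-token heuristic (an approximation; the model's actual tokenizer will count differently):

```python
CONTEXT_WINDOW = 202_752   # total tokens the model can attend to
MAX_REPLY = 131_072        # maximum tokens in a single reply

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (heuristic, not exact)."""
    return max(1, len(text) // 4)

def fits(prompt: str, reserved_reply: int = 1_000) -> bool:
    """Check whether a prompt leaves room for reserved_reply tokens of output."""
    reserved = min(reserved_reply, MAX_REPLY)
    return estimate_tokens(prompt) + reserved <= CONTEXT_WINDOW

print(fits("hello " * 100))  # → True: a tiny prompt easily fits
```

For production use, replace the heuristic with the provider's own tokenizer or token-count endpoint.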

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aa_intelligence_index: 43

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
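Tool use and strict JSON output are typically driven through the request payload. A hedged sketch of what such a request might look like, assuming an OpenAI-compatible chat-completions schema (the model id string, tool name, and field names here are assumptions; check Z.ai's own API documentation):

```python
import json

# Hypothetical tool definition in the OpenAI-style function-calling format.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",           # illustrative tool name
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "glm-5v-turbo",             # assumed model id; verify against Z.ai docs
    "messages": [{"role": "user", "content": "What's the weather in Lisbon?"}],
    "tools": [weather_tool],
    "response_format": {"type": "json_object"},  # strict JSON output, if supported
    "stream": True,                      # the model streams replies
}

print(json.dumps(payload)[:40])  # request body, ready to POST to the chat endpoint
```

The same payload shape extends to image inputs by sending multimodal content parts in the user message, per the provider's schema.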

When to pick GLM 5V Turbo

  • Multi-step reasoning, research agents, or hard math.
  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

GLM 5V Turbo — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


What does GLM 5V Turbo cost?
At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GLM 5V Turbo costs roughly $250 per month. Input is $1.20/1M tokens and output is $4.00/1M tokens.

How large is the context window?
GLM 5V Turbo has a 202,752-token context window (large memory — an entire book or large codebase). That means you can fit about 38,016 words of input and history in a single call.

What can it do beyond text generation?
Beyond text generation, GLM 5V Turbo supports image understanding, deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.

When was it released?
GLM 5V Turbo was released in April 2026, with training data cut off around December 2025.

What are the closest alternatives?
Models in a similar class include Gemini 2.5 Pro, Gemini 2.5 Pro Preview 05-06, and Gemini 2.5 Pro Preview 06-05. The "Similar models" section below this FAQ links to each.
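The $250/month figure quoted above follows directly from the listed rates; the arithmetic can be checked in a few lines (a sanity check, not an official calculator):

```python
conversations = 50_000
prompt_tokens, reply_tokens = 1_500, 800

input_cost = conversations * prompt_tokens / 1e6 * 1.20   # 75M tokens at $1.20/1M
output_cost = conversations * reply_tokens / 1e6 * 4.00   # 40M tokens at $4.00/1M

print(input_cost, output_cost, input_cost + output_cost)  # → 90.0 160.0 250.0
```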

Still unsure?

Compare GLM 5V Turbo against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.