Z.ai: GLM 5 Turbo

Public pricing · Intelligence 81/100 · Large memory · Deep thinking · Tool calling

Z.ai: GLM 5 Turbo is a text model built for agentic workflows and tool calling. It combines reliable tool use and agentic performance, low latency and efficient inference, a 203K-token context window, and balanced pricing, delivering dependable results in agent workflows, tool use, and orchestration. It suits teams that prioritize latency, cost, and throughput, and that need stable answers, long context, clear structure, and room to scale their deployment.

Input

$1.20/1M

Output

$4.00/1M

Cached

$0.24/1M

Batch

$0.60/1M

Calculate your GLM 5 Turbo bill.

Set your workload — see cost at your exact volume.


Workload presets: 1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M
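The calculator's arithmetic is simple enough to sketch directly. A minimal estimate using the public per-token prices listed above; the workload numbers are the page's example defaults and should be replaced with your own:

```python
# Estimate a monthly GLM 5 Turbo bill from the public list prices.
INPUT_PER_M = 1.20    # USD per 1M input tokens
OUTPUT_PER_M = 4.00   # USD per 1M output tokens

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Monthly USD cost for a given conversation volume and token sizes."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_PER_M + output_millions * OUTPUT_PER_M

# 50,000 conversations/month, 1,500-token prompts, 800-token replies
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 250.0
```

At that workload, input accounts for $90 and output for $160 of the total, so reply length dominates the bill at these prices.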

Technical specifications

GLM 5 Turbo at a glance.

Memory

202,752

tokens

Max reply

131,072

tokens

Memory tier

Large

an entire book or large codebase


Released

Mar 2026

Training cutoff

Oct 2025

Availability

Public pricing

Status

Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • FrontierMath Tier 4: 2.1
  • GPQA Diamond: 87.82
  • SWE-bench Verified: 72.08

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick GLM 5 Turbo

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

GLM 5 Turbo — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GLM 5 Turbo costs roughly $250 per month. Input is $1.20/1M tokens and output is $4.00/1M tokens.
GLM 5 Turbo has a 202,752-token context window (large memory — an entire book or large codebase). At the usual estimate of roughly 0.75 English words per token, that means you can fit about 150,000 words of input and history in a single call.
Beyond text generation, GLM 5 Turbo supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
GLM 5 Turbo was released in March 2026, with training data cut off around October 2025.
Models in a similar class include GLM 5V Turbo, GLM 5.1, and GLM 4.6. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare GLM 5 Turbo against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.