Z.ai: GLM 4.6

Public pricing · Intelligence 78/100 · Large memory · Deep reasoning · Tool use

Z.ai: GLM 4.6 is a text model for general chat, analysis, and production use. It combines stable general-purpose performance, a 205K-token context window, and a balanced price profile, making it a reliable choice for general chat, analysis, and production workloads.

Input: $0.39 / 1M tokens

Output: $1.90 / 1M tokens

Cached input: $0.11 / 1M tokens

Batch: $0.30 / 1M tokens

Calculate your GLM 4.6 bill.

Set your workload — see cost at your exact volume.


Workload presets: 1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M conversations per month.
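The monthly bill follows directly from the per-token rates above. A minimal sketch of the same calculation (rates hardcoded from the pricing table; the workload figures are the FAQ's reference example):

```python
# Estimate monthly GLM 4.6 spend from the published per-token rates.
# Rates are USD per 1M tokens, taken from the pricing table above.
INPUT_RATE = 0.39
OUTPUT_RATE = 1.90

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Return estimated USD per month for a given workload."""
    input_millions = conversations * prompt_tokens / 1_000_000    # input tokens, in millions
    output_millions = conversations * reply_tokens / 1_000_000    # output tokens, in millions
    return input_millions * INPUT_RATE + output_millions * OUTPUT_RATE

# Reference workload: 50,000 conversations/month, 1,500-token prompts, 800-token replies.
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # → $105.25
```

Cached-input and batch rates (listed above) would lower the input side of this estimate; the sketch uses the standard on-demand rates only.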

Technical specifications

GLM 4.6 at a glance.

Memory: 204,800 tokens

Max reply: 204,800 tokens

Memory tier: Large (an entire book or large codebase)

Tokenizer: —

Released: Sep 2025

Training cutoff: Jun 2025

Availability: Public pricing

Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aa_intelligence_index: 30
  • frontiermath_tier_4: 2.13
  • livecodebench: 82.8
  • swe_bench_verified: 68

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
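Tool calling and strict JSON output are typically exercised through an OpenAI-style chat-completions payload. The sketch below only constructs such a request body; the model identifier, tool name, and field layout are illustrative assumptions modeled on that common schema, not Z.ai's documented API contract.

```python
import json

# Hypothetical request body for a tool-calling chat completion.
# Field names follow the widely used OpenAI-style schema; consult
# Z.ai's API documentation for the actual endpoint and contract.
request_body = {
    "model": "glm-4.6",  # illustrative model identifier
    "messages": [
        {"role": "user", "content": "What's the weather in Amsterdam?"}
    ],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, defined by the caller
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "response_format": {"type": "json_object"},  # strict JSON output mode
}

payload = json.dumps(request_body)
print(len(json.loads(payload)["tools"]))  # → 1 (payload round-trips cleanly)
```

The model decides whether to emit a tool call matching the declared schema; `response_format` constrains plain replies to valid JSON.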

When to pick GLM 4.6

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

GLM 4.6 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GLM 4.6 costs roughly $105 per month. Input is $0.39/1M tokens and output is $1.90/1M tokens.
GLM 4.6 has a 204,800-token context window (large memory: an entire book or large codebase). At the common heuristic of roughly 0.75 English words per token, that means you can fit about 150,000 words of input and history in a single call.
Beyond text generation, GLM 4.6 supports deep step-by-step reasoning, calling functions / tools, strict JSON output, fine-tuning on your own data. It streams replies by default.
GLM 4.6 was released in September 2025, with training data cut off around June 2025.
Models in a similar class include GLM 4.7, GLM 4.7 Flash, Kimi K2.5. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare GLM 4.6 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.