Z.ai: GLM 4.7 Flash

Public pricing · Intelligence 77/100 · Large memory · Deep thinking · Tool use

Z.ai: GLM 4.7 Flash is a text model aimed at coding, software development, and agentic work. It combines strong coding performance, reliable tool use and agent behavior, a 203K-token context window, and low cost, which makes it a steady performer across those workloads. It suits situations where latency, cost, and throughput matter, and is a practical choice for teams that want stable output, flexible deployment, and room to scale. It helps wherever you need consistent responses, long-context handling, and the flexibility to go from prototype to production.

Input

$0.06/1M

Output

$0.40/1M

Cached

$0.01/1M

Batch

$0.04/1M

Calculate your GLM 4.7 Flash bill.

Set your workload — see cost at your exact volume.

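As a sketch of how a bill at a given volume can be estimated from the listed per-token rates (the rates are the ones quoted above; the workload figures are just an example):

```python
# Estimate a monthly GLM 4.7 Flash bill from the listed rates.
# Rates are USD per 1M tokens, as quoted on this page.
INPUT_RATE = 0.06
OUTPUT_RATE = 0.40

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Rough monthly cost in USD for a given workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_RATE + output_millions * OUTPUT_RATE

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # 20.5
```

At that volume, input costs $4.50 (75M tokens) and output $16.00 (40M tokens), which the page rounds to roughly $21 per month.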

Technical specifications

GLM 4.7 Flash at a glance.

Memory

202,752

tokens

Max reply

131,072

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

Released

Dec 2025

Training cutoff

Aug 2025

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • AIME 2025: 91.6
  • GPQA Diamond: 75.2
  • SWE-bench Verified: 59.2

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick GLM 4.7 Flash

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

GLM 4.7 Flash — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GLM 4.7 Flash costs roughly $21 per month. Input is $0.06/1M tokens and output is $0.40/1M tokens.
GLM 4.7 Flash has a 202,752-token context window (large memory: an entire book or large codebase). At the usual estimate of about 0.75 words per token, that is roughly 152,000 words of input and history in a single call.
Beyond text generation, GLM 4.7 Flash supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
GLM 4.7 Flash was released in December 2025, with training data cut off around August 2025.
Models in a similar class include GLM 4.7, GLM 4.6, and GPT-5 Nano. The "Similar models" section below this FAQ links to each.
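A quick sanity check on the context window, using the FAQ's example turn size (1,500-token prompt plus 800-token reply) purely as an illustration:

```python
# How many of the FAQ's example turns (1,500-token prompt +
# 800-token reply) fit in GLM 4.7 Flash's 202,752-token window.
CONTEXT_WINDOW = 202_752
turn_tokens = 1_500 + 800
print(CONTEXT_WINDOW // turn_tokens)  # 88
```

So even long-running chats at that turn size can keep dozens of full exchanges in context before anything needs to be truncated.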

Still unsure?

Compare GLM 4.7 Flash against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.