Qwen: QwQ 32B

Public pricing · Intelligence 58/100 · Medium memory · Deep reasoning · Tool use

Qwen: QwQ 32B is a text model aimed at reasoning and problem solving. It combines deep reasoning and planning, a 131K-token context window, and low cost, which makes it dependable for reasoning, analysis, and hard problem solving. It suits work where quality, speed, and cost all matter, and it is practical for teams that want stable output, flexible deployment, and room to scale. It is useful wherever you need reliable responses, long-context handling, and the flexibility to run everything from prototypes to production.

Input: $0.15/1M

Output: $0.58/1M

Cached: $0.01/1M

Batch: $0.07/1M

Calculate your QwQ 32B bill.

Set your workload — see cost at your exact volume.

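You can reproduce the calculator's arithmetic yourself. The snippet below is a minimal sketch in Python using the listed per-million-token rates; the workload figures are placeholders to replace with your own volumes.

```python
# Minimal cost estimate for QwQ 32B using the listed rates (USD per 1M tokens).
INPUT_PER_M = 0.15   # input tokens
OUTPUT_PER_M = 0.58  # output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimate monthly spend from a per-conversation token profile."""
    input_cost = conversations * prompt_tokens / 1e6 * INPUT_PER_M
    output_cost = conversations * reply_tokens / 1e6 * OUTPUT_PER_M
    return input_cost + output_cost

# Example workload: 50,000 conversations with 1,500-token prompts and 800-token replies.
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # $34.45
```

If part of your traffic qualifies for the cached or batch rates above, substitute those prices for the relevant share of tokens.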

Technical specifications

QwQ 32B at a glance.

Memory: 131,072 tokens

Max reply: 131,072 tokens

Memory tier: Medium (a long report or a codebase file)

Tokenizer: qwen

Released: Mar 2025

Training cutoff: Oct 2024

Availability: Public pricing

Status: Active
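
Before a call, you can check locally whether a prompt fits the 131,072-token window. A minimal sketch, assuming the Hugging Face transformers library and the publicly hosted Qwen/QwQ-32B tokenizer:

```python
# Count tokens with the QwQ 32B tokenizer before sending a request.
# Assumes `pip install transformers` and access to the Qwen/QwQ-32B repo on Hugging Face.
from transformers import AutoTokenizer

CONTEXT_WINDOW = 131_072  # tokens, per the spec above

tokenizer = AutoTokenizer.from_pretrained("Qwen/QwQ-32B")

def fits_in_context(prompt: str, reserve_for_reply: int = 8_192) -> bool:
    """True if the prompt leaves `reserve_for_reply` tokens free for the model's answer."""
    return len(tokenizer.encode(prompt)) + reserve_for_reply <= CONTEXT_WINDOW

print(fits_in_context("Summarise the attached incident report ..."))
```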

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • AIME 2024: 79.5
  • IFEval: 83.9
  • LiveCodeBench: 63.4
  • BBH: 2.87
  • MMLU-Pro: 2.18

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies (JSON output and streaming are sketched below this list)
  • Fine-tunable on your data
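
The JSON and streaming capabilities are easiest to see in code. The sketch below assumes an OpenAI-compatible endpoint serving QwQ 32B; the base URL, API-key variable, and model identifier are placeholders for whichever provider or self-hosted deployment you use.

```python
# Streamed reply with strict JSON output via an OpenAI-compatible endpoint.
# BASE_URL, PROVIDER_API_KEY, and the model id are placeholders, not real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",   # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],       # placeholder key variable
)

stream = client.chat.completions.create(
    model="qwq-32b",  # placeholder model id
    messages=[
        {"role": "system", "content": "Reply with a single JSON object only."},
        {"role": "user", "content": "Classify the sentiment of: 'Great latency, poor docs.'"},
    ],
    response_format={"type": "json_object"},  # strict JSON mode, where the provider supports it
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```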

When to pick QwQ 32B

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs (see the tool-calling sketch after this list).
  • High-volume workloads where unit cost matters.
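
For the agentic case, the same OpenAI-compatible interface exposes function calling. A minimal sketch of one tool-call round trip; the endpoint, key, model id, and the get_fx_rate tool are all illustrative placeholders.

```python
# One tool-call round trip: the model requests a function, we run it, and send back the result.
import json, os
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1",   # placeholder
                api_key=os.environ["PROVIDER_API_KEY"])       # placeholder

tools = [{
    "type": "function",
    "function": {
        "name": "get_fx_rate",  # hypothetical tool for illustration
        "description": "Return the exchange rate between two currencies.",
        "parameters": {
            "type": "object",
            "properties": {"base": {"type": "string"}, "quote": {"type": "string"}},
            "required": ["base", "quote"],
        },
    },
}]

messages = [{"role": "user", "content": "What is 100 EUR in USD right now?"}]
first = client.chat.completions.create(model="qwq-32b", messages=messages, tools=tools)
call = first.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)

# Run the real tool here; a fixed rate stands in for a live lookup.
result = {"base": args["base"], "quote": args["quote"], "rate": 1.08}

messages += [
    first.choices[0].message,
    {"role": "tool", "tool_call_id": call.id, "content": json.dumps(result)},
]
final = client.chat.completions.create(model="qwq-32b", messages=messages, tools=tools)
print(final.choices[0].message.content)
```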

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

QwQ 32B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, QwQ 32B costs roughly $34 per month. Input is $0.15/1M tokens and output is $0.58/1M tokens.
QwQ 32B has a 131,072-token context window (medium memory, a long report or a codebase file). At the usual estimate of about 0.75 words per token, that is roughly 98,000 words of input and history in a single call.
Beyond text generation, QwQ 32B supports deep step-by-step reasoning, function/tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
QwQ 32B was released in March 2025, with training data cut off around October 2024.
Models in a similar class include Qwen3 Coder Next, Qwen3.5-35B-A3B, and Qwen3 235B A22B Thinking 2507. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare QwQ 32B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.