Qwen: QwQ 32B

Public pricing · Intelligence 58/100 · Medium memory · Deep thinking · Tool use

Qwen: QwQ 32B is a text model built for reasoning and problem solving. It combines deep reasoning and planning, a 131K-token context, and a low-cost profile, making it dependable for reasoning, analysis, and hard problem solving. It is a practical choice when quality, speed, and cost all matter, especially for teams that need stable output, flexible deployment, and room to scale.

Input

$0.15/1M

Output

$0.58/1M

Cached

$0.01/1M

Batch

$0.07/1M
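The per-token rates above translate into a monthly bill with simple arithmetic. A minimal sketch, using the listed input and output rates and an illustrative uncached workload (cached and batch rates would lower the total):

```python
# Estimate a monthly QwQ 32B bill from the per-token rates listed above.
# Rates are USD per 1M tokens; the workload numbers are illustrative assumptions.

INPUT_RATE = 0.15   # $/1M input tokens
OUTPUT_RATE = 0.58  # $/1M output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Return the estimated monthly cost in USD for a simple, uncached workload."""
    input_millions = conversations * prompt_tokens / 1_000_000
    output_millions = conversations * reply_tokens / 1_000_000
    return input_millions * INPUT_RATE + output_millions * OUTPUT_RATE

# Example: 50,000 conversations with 1,500-token prompts and 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 34.45
```

Output tokens dominate the bill at these rates, so trimming reply length is usually the quickest lever on cost.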

Calculate your QwQ 32B bill.

Set your workload — see cost at your exact volume.

What would QwQ 32B cost you?

Adjust the workload to see your monthly bill.

1,000 · 10,000 · 50,000 · 250,000 · 1M · 10M

Technical specifications

QwQ 32B at a glance.

Memory

131,072

tokens

Max reply

131,072

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

qwen

Released

Mar 2025

Training cutoff

Oct 2024

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aime_2024

    79.5
  • ifeval

    83.9
  • livecodebench

    63.4
  • bbh

    2.87
  • mmlu_pro

    2.18

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
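Many providers expose QwQ 32B through an OpenAI-compatible Chat Completions interface, so tool calling, strict JSON output, and streaming are requested via the usual fields. A hedged sketch of such a request body — the model id, tool name, and schema are illustrative assumptions, so check your provider's docs for exact values:

```python
# Sketch of a tool-calling request body in the OpenAI-compatible Chat Completions
# shape that many QwQ 32B providers expose. The model id, tool name, and schema
# are illustrative assumptions, not guaranteed values for any specific provider.
import json

request_body = {
    "model": "qwq-32b",  # assumed provider model id
    "messages": [
        {"role": "user", "content": "What is 17% of 2,340? Use the calculator tool."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "calculator",  # hypothetical tool
                "description": "Evaluate a basic arithmetic expression.",
                "parameters": {
                    "type": "object",
                    "properties": {"expression": {"type": "string"}},
                    "required": ["expression"],
                },
            },
        }
    ],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # stream the reply
}

print(json.dumps(request_body, indent=2))
```

The same body shape carries plain chat requests if you drop `tools` and `response_format`.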

When to pick QwQ 32B

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

QwQ 32B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, QwQ 32B costs roughly $34 per month. Input is $0.15/1M tokens and output is $0.58/1M tokens.
QwQ 32B has a 131,072-token context window (medium memory — a long report or a codebase file). That means you can fit roughly 98,000 words of input and history in a single call, at a typical ~0.75 words per token.
Beyond text generation, QwQ 32B supports deep step-by-step reasoning, function and tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
QwQ 32B was released in March 2025, with training data cut off around October 2024.
Models in a similar class include Qwen3 Coder Next, Qwen3.5-35B-A3B, and Qwen3 235B A22B Thinking 2507; the "Similar models" section below this FAQ links to each.
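The context-window math in the FAQ above can be sketched as a quick budget check. This uses the common heuristic of ~0.75 English words per token (about 4/3 tokens per word); real counts depend on the qwen tokenizer, so treat it as an estimate only:

```python
# Rough context-budget check for QwQ 32B's 131,072-token window, using the
# common ~0.75 words-per-token heuristic (i.e. ~4/3 tokens per word).
# Real token counts depend on the qwen tokenizer; this is an estimate only.

CONTEXT_WINDOW = 131_072  # tokens

def estimated_tokens(words: int) -> int:
    """Estimate token count from a word count (heuristic, not exact)."""
    return (words * 4 + 2) // 3  # ceil(words * 4/3)

def fits_in_context(prompt_words: int, reserved_reply_tokens: int = 0) -> bool:
    """Check whether a prompt (plus reserved reply space) fits in the window."""
    return estimated_tokens(prompt_words) + reserved_reply_tokens <= CONTEXT_WINDOW

print(fits_in_context(90_000))          # ~120k estimated tokens: fits
print(fits_in_context(90_000, 20_000))  # plus 20k reserved for the reply: does not
```

Reserving room for the model's reply matters here because the maximum reply shares the same 131,072-token budget as the prompt.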

Still unsure?

Compare QwQ 32B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.