DeepSeek: DeepSeek V3.2

Public pricing · Intelligence 80/100 · Medium memory · Deep reasoning · Tool use

DeepSeek: DeepSeek V3.2 is a text-only model built for reasoning, planning, and automation. It combines reliable tool use and agentic behavior, deep reasoning and planning, a 131K-token context window, and a low-cost profile to deliver dependable work across reasoning, planning, and multi-step automation.

Input

$0.25/1M

Output

$0.38/1M

Cached

$0.03/1M

Batch

$0.14/1M
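The per-million-token prices above turn into a monthly bill with simple arithmetic. A minimal sketch, using the same illustrative workload as the FAQ (the function name and defaults are ours, not part of any API):

```python
def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_price=0.25, output_price=0.38):
    """Estimate a monthly bill (USD) from per-1M-token prices."""
    input_cost = conversations * prompt_tokens / 1_000_000 * input_price
    output_cost = conversations * reply_tokens / 1_000_000 * output_price
    return input_cost + output_cost

# 50,000 conversations/month, 1,500-token prompts, 800-token replies
print(f"${monthly_cost(50_000, 1_500, 800):.2f}")  # → $33.95
```

Cached-input ($0.03/1M) and batch ($0.14/1M) rates would lower the input side further for repeated prompts or offline jobs.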

Calculate your DeepSeek V3.2 bill.

Set your workload — see cost at your exact volume.


Technical specifications

DeepSeek V3.2 at a glance.

Memory

131,072

tokens

Max reply

32,768

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

deepseek

Released

Dec 2025

Training cutoff

Jul 2025

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • Artificial Analysis Intelligence Index

    32
  • FrontierMath (Tier 4)

    2.1
  • GPQA Diamond

    83.42

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
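The strict-JSON and tool-calling capabilities are typically exercised through an OpenAI-compatible chat request. A hedged sketch of what such a request body might look like — the model id, the `get_weather` tool, and the endpoint conventions are illustrative assumptions, not taken from this page:

```python
import json

# Hypothetical request payload for an OpenAI-compatible chat endpoint.
# The model id "deepseek-chat" and the weather tool are examples only.
payload = {
    "model": "deepseek-chat",
    "messages": [
        {"role": "user", "content": "What's the weather in Lisbon? Reply in JSON."}
    ],
    # Strict JSON output: constrain the reply to a valid JSON object.
    "response_format": {"type": "json_object"},
    # Tool use: declare a function the model may choose to call.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Fetch current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "stream": True,  # stream the reply token by token
}

print(json.dumps(payload)[:60])  # serializes cleanly for an HTTP POST
```

The same payload shape covers the three capability bullets above: `response_format` for strict JSON, `tools` for function calling, and `stream` for streamed replies.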

When to pick DeepSeek V3.2

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

DeepSeek V3.2 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, DeepSeek V3.2 costs roughly $34 per month. Input is $0.25 /1M tokens and output is $0.38 /1M tokens.
DeepSeek V3.2 has a 131,072-token context window (medium memory — a long report or a codebase file). At roughly 0.75 words per token, that means you can fit about 98,000 words of input and history in a single call.
Beyond text generation, DeepSeek V3.2 supports deep step-by-step reasoning, calling functions / tools, strict JSON output. It streams replies by default.
DeepSeek V3.2 was released in December 2025, with training data cut off around July 2025.
Models in a similar class include DeepSeek V3.2 Exp, DeepSeek V3.1 Terminus, DeepSeek V3.1. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare DeepSeek V3.2 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.