OpenAI: o1-pro

Public pricing · Intelligence 79/100 · Large memory · Vision · Deep reasoning · Tools

OpenAI: o1-pro is a multimodal model built for demanding reasoning and analysis. It pairs multimodal input handling with deep reasoning and planning, a 200K-token context window, and a premium price profile for reliable work on hard reasoning and analysis tasks.

Input: $150.00 / 1M tokens

Output: $600.00 / 1M tokens

Cached input: $75.00 / 1M tokens

Batch: $75.00 / 1M tokens

Calculate your o1-pro bill.

Set your workload — 1,000, 10,000, 50,000, 250,000, 1M, or 10M conversations a month — and see your monthly cost at your exact volume.
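The calculator's arithmetic is simple enough to sketch directly. A minimal estimate in Python, using the public per-token prices above and assuming no caching or batch discounts:

```python
# Prices in USD per 1M tokens, from the pricing table above.
INPUT_PER_M = 150.00
OUTPUT_PER_M = 600.00

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimated monthly o1-pro bill in USD (no cached-input or batch discounts)."""
    per_conversation = (prompt_tokens * INPUT_PER_M +
                        reply_tokens * OUTPUT_PER_M) / 1_000_000
    return conversations * per_conversation

# Example workload: 50,000 conversations/month, 1,500-token prompts, 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # → 35250.0
```

Swapping in the cached-input or batch rate ($75.00/1M) for the input term models the discounted paths.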

Technical specifications

o1-pro at a glance.

Memory: 200,000 tokens

Max reply: 100,000 tokens

Memory tier: Large (an entire book or large codebase)

Tokenizer: tiktoken-o200k

Released: Mar 2025

Training cutoff: Oct 2023

Availability: Public pricing

Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • Humanity's Last Exam: 8.12

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick o1-pro

  • Multi-step reasoning, research agents, or hard math.
  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

o1-pro — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, o1-pro costs roughly $35,250 per month. Input is $150.00 /1M tokens and output is $600.00 /1M tokens.
o1-pro has a 200,000-token context window (large memory — an entire book or large codebase). That means you can fit about 150,000 words of input and history in a single call, at the common rule of thumb of roughly 0.75 words per token.
Beyond text generation, o1-pro supports understanding images, deep step-by-step reasoning, calling functions / tools, strict JSON output. It streams replies by default.
o1-pro was released in March 2025, with training data cut off around October 2023.
Models in a similar class include GPT-5, GPT-5 Codex, and GPT-5 Image. The "Similar models" section below this FAQ links to each.
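The context-window answer above can be turned into a quick pre-flight check. A rough sketch, using the common ~4-characters-per-token heuristic (an assumption for illustration — the real tokenizer is tiktoken-o200k, so treat the result as an estimate only):

```python
# o1-pro limits from the spec table above.
CONTEXT_WINDOW = 200_000   # tokens
MAX_REPLY = 100_000        # tokens
CHARS_PER_TOKEN = 4        # heuristic, not the actual o200k tokenizer

def fits_in_context(text: str, reserved_for_reply: int = 0) -> bool:
    """Estimate whether `text` fits in o1-pro's context window,
    optionally reserving room for the model's reply."""
    estimated_tokens = len(text) / CHARS_PER_TOKEN
    return estimated_tokens <= CONTEXT_WINDOW - reserved_for_reply

# A 400,000-character document ≈ 100,000 tokens: fits comfortably.
print(fits_in_context("x" * 400_000))               # → True
# The same document with the full 100K-token reply reserved: exactly at the limit.
print(fits_in_context("x" * 400_000, MAX_REPLY))    # → True
```

For an exact count, tokenize with the o200k encoding instead of the character heuristic.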

Still unsure?

Compare o1-pro against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.