OpenAI: GPT-4o (2024-11-20)

Public pricing · Intelligence 66/100 · Medium memory · Vision · Tool use

OpenAI: GPT-4o (2024-11-20) is a multimodal model for vision-language understanding. It combines multimodal input processing and image understanding with a 128K-token context window and a premium pricing profile, making it a reliable choice for vision-language understanding and content analysis.

Input

$2.50/1M

Output

$10.00/1M

Cached

$1.25/1M

Batch

$1.25/1M
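The per-million-token rates above translate into a bill with simple arithmetic. A minimal sketch (the function name and example workload are illustrative, not part of any OpenAI API):

```python
# Published GPT-4o (2024-11-20) rates, in USD per million tokens.
PRICE_PER_M = {"input": 2.50, "output": 10.00, "cached": 1.25, "batch": 1.25}

def monthly_cost(calls, input_tokens, output_tokens):
    """Estimate a monthly bill at the standard (non-cached, non-batch) rates."""
    input_usd = calls * input_tokens / 1_000_000 * PRICE_PER_M["input"]
    output_usd = calls * output_tokens / 1_000_000 * PRICE_PER_M["output"]
    return input_usd + output_usd

# Example: 10,000 calls/month, 1,000-token prompts, 500-token replies.
print(f"${monthly_cost(10_000, 1_000, 500):,.2f}")  # → $75.00
```

Cached-input and batch rates halve the input side of that estimate; output pricing still dominates for long replies.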

Calculate your GPT-4o (2024-11-20) bill.

Set your workload — see cost at your exact volume.


Technical specifications

GPT-4o (2024-11-20) at a glance.

Memory

128,000

tokens

Max reply

16,384

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

tiktoken-o200k

Released

Nov 2024

Training cutoff

Oct 2023

Availability

Public pricing

Status

Active
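The two hard limits in the table, the 128,000-token context window and the 16,384-token maximum reply, constrain every request together: prompt plus history plus reply must fit inside the window. A small sketch of that check (the helper name is illustrative):

```python
# GPT-4o (2024-11-20) limits from the spec table above.
CONTEXT_WINDOW = 128_000   # total tokens: prompt + history + reply
MAX_OUTPUT = 16_384        # maximum reply length in tokens

def fits(prompt_tokens, max_reply=MAX_OUTPUT):
    """Check whether a request stays within the model's published limits."""
    if max_reply > MAX_OUTPUT:
        return False
    return prompt_tokens + max_reply <= CONTEXT_WINDOW

print(fits(100_000))   # True:  100,000 + 16,384 <= 128,000
print(fits(120_000))   # False: 120,000 + 16,384 >  128,000
```

To count `prompt_tokens` accurately, use the model's own tokenizer (the o200k encoding, available via the `tiktoken` library) rather than a words-based estimate.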

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • gpqa_diamond: 47.89
  • humanitys_last_exam: 2.72
  • mmlu: 88.7
  • swe_bench_verified: 30.99

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

When to pick GPT-4o (2024-11-20)

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Code generation, review, or refactoring.
  • Multimodal pipelines mixing text + images.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

GPT-4o (2024-11-20) — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, GPT-4o (2024-11-20) costs roughly $588 per month. Input is $2.50/1M tokens and output is $10.00/1M tokens.
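That $588 figure follows directly from the published rates: 50,000 conversations × 1,500 prompt tokens is 75M input tokens, and 50,000 × 800 is 40M output tokens. A quick check:

```python
calls, prompt_tokens, reply_tokens = 50_000, 1_500, 800

input_usd = calls * prompt_tokens / 1_000_000 * 2.50    # 75M input tokens at $2.50/1M
output_usd = calls * reply_tokens / 1_000_000 * 10.00   # 40M output tokens at $10.00/1M

print(input_usd, output_usd, input_usd + output_usd)    # 187.5 400.0 587.5
```

Note that output tokens account for over two thirds of the bill at this workload, even though each reply is roughly half the length of its prompt.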
GPT-4o (2024-11-20) has a 128,000-token context window (medium memory — a long report or a codebase file). That means you can fit about 24,000 words of input and history in a single call.
Beyond text generation, GPT-4o (2024-11-20) supports image understanding, function / tool calling, strict JSON output, and fine-tuning on your own data. It streams replies by default.
GPT-4o (2024-11-20) was released in November 2024, with training data cut off around October 2023.
Models in a similar class include GPT Audio, GPT-4o Audio, and GPT-4o Search Preview. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare GPT-4o (2024-11-20) against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.