Meta: Llama 4 Scout

Public pricing · Intelligence 73/100 · Large memory · Vision · Tool use

Meta: Llama 4 Scout is a multimodal model built for vision-language understanding. It combines multimodal input handling and image comprehension with a 327,680-token context window and a low-cost profile, making it a reliable choice for vision-language understanding and content analysis.

Input

$0.08/1M

Output

$0.30/1M

Cached

$0.02/1M

Batch

$0.04/1M
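The four rates above combine into a quick monthly estimate. A minimal sketch in Python, assuming a simple workload of fixed-size prompts and replies (the per-token rates come from the table above; the workload numbers are the FAQ's example):

```python
# Per-million-token rates for Llama 4 Scout, from the pricing table above.
INPUT_RATE = 0.08    # $ per 1M input tokens
OUTPUT_RATE = 0.30   # $ per 1M output tokens

def monthly_cost(conversations, prompt_tokens, reply_tokens,
                 input_rate=INPUT_RATE, output_rate=OUTPUT_RATE):
    """Estimate the monthly bill for a fixed-size workload."""
    input_cost = conversations * prompt_tokens / 1_000_000 * input_rate
    output_cost = conversations * reply_tokens / 1_000_000 * output_rate
    return input_cost + output_cost

# The FAQ's example workload: 50,000 conversations a month,
# 1,500-token prompts, 800-token replies.
print(monthly_cost(50_000, 1_500, 800))  # roughly $18/month
```

Cached-input and batch rates would slot in the same way as extra terms, discounting whatever share of the input tokens is cached or batched.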

Calculate your Llama 4 Scout bill.

Set your workload — see cost at your exact volume.


Technical specifications

Llama 4 Scout at a glance.

Memory

327,680

tokens

Max reply

16,384

tokens

Memory tier

Large

an entire book or large codebase

Tokenizer

sentencepiece

Released

Apr 2025

Training cutoff

Aug 2024

Availability

Public pricing

Status

Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • GPQA Diamond: 57.2
  • HumanEval: 74.1
  • MATH: 50.3
  • MMLU: 79.6

What it can do

Capabilities & limits.

  • Understands images
  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
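Tool calling and strict JSON output are typically exercised through an OpenAI-compatible chat API. A minimal sketch of what such a request body could look like; the model ID, tool name, and schema here are illustrative placeholders, not part of this page's data:

```python
import json

# Hypothetical OpenAI-compatible request body exercising tool use and
# strict JSON output. Model ID and tool schema are placeholders.
request = {
    "model": "llama-4-scout",          # placeholder model ID
    "messages": [
        {"role": "user", "content": "Describe this screenshot."}
    ],
    "response_format": {"type": "json_object"},  # strict JSON output
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "extract_text",  # illustrative tool
                "description": "OCR a region of an image.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "region": {"type": "string"}
                    },
                    "required": ["region"],
                },
            },
        }
    ],
    "stream": True,  # the model can stream replies
}

# The payload must itself serialize to valid JSON before it goes over the wire.
body = json.dumps(request)
print(json.loads(body)["model"])
```

The same payload shape covers the image-understanding capability: providers that accept multimodal input typically let the `content` field carry a list of text and image parts instead of a plain string.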

When to pick Llama 4 Scout

  • Screenshot analysis, image understanding, or document OCR.
  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Very latency-sensitive, real-time apps where every millisecond counts.

FAQ

Llama 4 Scout — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Llama 4 Scout costs roughly $18 per month. Input is $0.08/1M tokens and output is $0.30/1M tokens.
Llama 4 Scout has a 327,680-token context window (large memory — an entire book or large codebase). That means you can fit roughly 245,000 words of input and history in a single call, at the common approximation of about 0.75 English words per token.
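The words-of-input figure is a rule-of-thumb conversion, not a property of the model. A sketch, assuming the common approximation of roughly 0.75 English words per token:

```python
CONTEXT_TOKENS = 327_680   # Llama 4 Scout context window
WORDS_PER_TOKEN = 0.75     # rough rule of thumb for English text

def tokens_to_words(tokens, ratio=WORDS_PER_TOKEN):
    """Approximate how many English words fit in a token budget."""
    return int(tokens * ratio)

print(tokens_to_words(CONTEXT_TOKENS))  # → 245760
```

The real ratio varies with language and content: code, non-English text, and dense punctuation all tokenize less efficiently than plain English prose.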
Beyond text generation, Llama 4 Scout supports understanding images, calling functions / tools, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Llama 4 Scout was released in April 2025, with training data cut off around August 2024.
Models in a similar class include Llama 4 Maverick, Gemma 4 26B A4B, Seed 1.6 Flash. The "Similar models" section below this FAQ links into each.

Still unsure?

Compare Llama 4 Scout against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.