DeepSeek: R1 Distill Llama 70B

Public pricing · Intelligence 66/100 · Medium memory · Deep reasoning · Tool calling

DeepSeek: R1 Distill Llama 70B is a text model suited to general chat, analysis, and production workloads. It combines solid general-purpose performance, a 131K-token context window, and a balanced cost position, delivering reliable results across those workloads. It fits teams that value quality, speed, and cost, and that need stable answers, longer context, clearly structured output, and room to scale deployments.

Input

$0.70/1M

Output

$0.80/1M

Cached

$0.17/1M

Batch

$0.35/1M

Calculate your R1 Distill Llama 70B bill.

Set your workload — see cost at your exact volume.

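The calculator applies straightforward per-token arithmetic to the public rates listed above. A minimal sketch; the workload numbers in the example are placeholders to swap for your own:

```python
# Estimate a monthly bill for R1 Distill Llama 70B at the public per-token rates.
INPUT_RATE = 0.70 / 1_000_000   # $ per input token
OUTPUT_RATE = 0.80 / 1_000_000  # $ per output token

def monthly_cost(conversations, prompt_tokens, reply_tokens):
    """Total monthly cost in dollars for a given workload."""
    input_cost = conversations * prompt_tokens * INPUT_RATE
    output_cost = conversations * reply_tokens * OUTPUT_RATE
    return input_cost + output_cost

# Example: 50,000 conversations with 1,500-token prompts and 800-token replies.
print(round(monthly_cost(50_000, 1_500, 800), 2))  # → 84.5
```

The same function adapts to the cached ($0.17/1M input) or batch ($0.35/1M) tiers by substituting the rate constants.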


Technical specifications

R1 Distill Llama 70B at a glance.

Memory

131,072

tokens

Max reply

16,384

tokens

Memory tier

Medium

a long report or a codebase file

Tokenizer

llama3

Released

Jan 2025

Training cutoff

Dec 2023

Availability

Public pricing

Status

active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aa_intelligence_index

    16
  • aime_2024

    86.7
  • bbh

    35.82
  • gpqa_diamond

    65.2
  • ifeval

    43.36
  • livecodebench

    57.5
  • math

    94.5
  • mmlu_pro

    41.65

What it can do

Capabilities & limits.

  • Deep step-by-step thinking
  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data
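For teams wiring the model into an app, the capability switches above typically map onto request fields. A hedged sketch: the model id is a placeholder, and the field names assume the OpenAI-style chat-completions schema that many hosts of this model expose; check your provider's docs.

```python
# Hypothetical request payload exercising the listed capabilities:
# tool calling, strict JSON output, and streamed replies.
request = {
    "model": "deepseek-r1-distill-llama-70b",  # placeholder id; varies by host
    "messages": [
        {"role": "user", "content": "Summarize this report as JSON."},
    ],
    "response_format": {"type": "json_object"},  # strict JSON output
    "stream": True,                              # stream the reply token by token
    "tools": [{                                  # function / tool calling
        "type": "function",
        "function": {
            "name": "lookup_figure",             # hypothetical tool
            "description": "Fetch a numeric figure from the report store.",
            "parameters": {
                "type": "object",
                "properties": {"metric": {"type": "string"}},
                "required": ["metric"],
            },
        },
    }],
}
print(sorted(request))
```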

When to pick R1 Distill Llama 70B

  • Multi-step reasoning, research agents, or hard math.
  • Agentic workflows that call tools or APIs.
  • High-volume workloads where unit cost matters.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

R1 Distill Llama 70B — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, R1 Distill Llama 70B costs roughly $85 per month. Input is $0.70/1M tokens and output $0.80/1M tokens.
R1 Distill Llama 70B has a 131,072-token context window (medium memory: a long report or a codebase file). At the common rule of thumb of roughly three-quarters of a word per token, that is on the order of 98,000 English words of input and history in a single call, though actual counts depend on the tokenizer and content.
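The same rule of thumb gives a quick fit check before sending a long document. A sketch assuming roughly 4/3 tokens per English word and reserving the full 16,384-token reply budget; real token counts vary by tokenizer:

```python
# Rough check: does a document fit in the 131,072-token context window?
CONTEXT_TOKENS = 131_072
MAX_REPLY_TOKENS = 16_384
TOKENS_PER_WORD = 4 / 3  # rule of thumb for English text; varies by tokenizer

def fits_in_context(word_count, reply_budget=MAX_REPLY_TOKENS):
    """True if `word_count` words of input plus the reply budget fit the window."""
    estimated_input_tokens = word_count * TOKENS_PER_WORD
    return estimated_input_tokens + reply_budget <= CONTEXT_TOKENS

print(fits_in_context(80_000))   # → True
print(fits_in_context(100_000))  # → False
```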
Beyond text generation, R1 Distill Llama 70B supports deep step-by-step reasoning, function / tool calling, and strict JSON output. It streams replies by default.
R1 Distill Llama 70B was released in January 2025, with training data cut off around December 2023.
Models in a similar class include R1 0528, DeepSeek V3.2 Exp, and DeepSeek V3.2. The "Similar models" section below this FAQ links to each.

Still unsure?

Compare R1 Distill Llama 70B against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.