AI21 Labs: Jamba Large 1.7

Public pricing · Intelligence 79/100 · Large memory · Tool use

AI21's Jamba Large 1.7 is a text model for general chat, analysis, and production use. It combines low latency and efficient inference with a 256K-token context window and a balanced cost profile. Use it when latency, cost, and throughput all matter.

Input: $2.00/1M

Output: $8.00/1M

Cached: $0.20/1M

Batch: $1.00/1M

Calculate your Jamba Large 1.7 bill.

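The example bill from the FAQ below can be reproduced with a few lines of arithmetic. This is a minimal sketch using the list prices above; the workload numbers (50,000 conversations a month, 1,500-token prompts, 800-token replies) are the ones used in the FAQ.

```python
# Rough monthly-cost estimator for Jamba Large 1.7 at public list prices.
INPUT_PER_M = 2.00   # $ per 1M input tokens
OUTPUT_PER_M = 8.00  # $ per 1M output tokens

def monthly_cost(conversations: int, prompt_tokens: int, reply_tokens: int) -> float:
    """Estimate the monthly bill in dollars for a simple chat workload."""
    input_cost = conversations * prompt_tokens / 1e6 * INPUT_PER_M
    output_cost = conversations * reply_tokens / 1e6 * OUTPUT_PER_M
    return input_cost + output_cost

# 50,000 conversations/month, 1,500-token prompts, 800-token replies:
print(monthly_cost(50_000, 1_500, 800))  # → 470.0
```

Input dominates the volume here (75M tokens vs 40M), but output dominates the bill ($320 vs $150) because output tokens cost four times as much.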

Technical specifications

Jamba Large 1.7 at a glance.

Memory: 256,000 tokens

Max reply: 4,096 tokens

Memory tier: Large (an entire book or large codebase)
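The two limits above interact: a request must leave room inside the 256,000-token window for the reply, which is capped at 4,096 tokens. A minimal budget check might look like the sketch below; the 4-characters-per-token estimate is a crude heuristic of ours, not AI21's tokenizer.

```python
# Sketch: does a prompt plus chat history fit Jamba Large 1.7's context
# window, leaving headroom for the maximum-length reply?
CONTEXT_WINDOW = 256_000  # tokens (from the spec table)
MAX_REPLY = 4_096         # tokens (from the spec table)

def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 chars/token); use a real tokenizer in production.
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, history: list[str]) -> bool:
    used = rough_token_count(prompt) + sum(rough_token_count(m) for m in history)
    return used + MAX_REPLY <= CONTEXT_WINDOW

print(fits_in_context("Summarize this report.", ["hello"] * 100))  # → True
```

With a window this large, the check rarely fails for chat, but it matters when stuffing whole books or codebases into a single call.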

Tokenizer

Released: Mar 2025

Training cutoff: Oct 2024

Availability: Public pricing

Status: Active

Benchmarks

Quality benchmarks

Independent evaluations from public leaderboards. Higher is better.

  • aa_intelligence_index: 11

What it can do

Capabilities & limits.

Supported:

  • Uses tools / calls functions
  • Strict JSON output
  • Streams replies
  • Fine-tunable on your data

Not supported:

  • Understanding images (text-only model)
  • Deep step-by-step thinking
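Even with strict JSON output enabled, it is worth validating replies locally before acting on them. The sketch below is a generic validator, not part of any AI21 SDK, and the expected keys are a made-up example schema.

```python
import json

REQUIRED_KEYS = {"sentiment", "confidence"}  # hypothetical schema for illustration

def parse_strict_json(reply: str) -> dict:
    """Parse a model reply and check it carries the keys we asked for."""
    data = json.loads(reply)  # raises ValueError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"reply missing keys: {sorted(missing)}")
    return data

print(parse_strict_json('{"sentiment": "positive", "confidence": 0.93}'))
```

A check like this turns a silently malformed reply into a loud failure you can retry, which is usually what production pipelines want.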

When to pick Jamba Large 1.7

  • Agentic workflows that call tools or APIs.
  • Long documents, full codebases, or extensive chat histories.

When to look elsewhere

  • Your workload involves images — pick a vision-capable model instead.

FAQ

Jamba Large 1.7 — the questions we see most.

Pricing, capabilities, alternatives — generated from the same data that powers the calculator above.


At a typical workload of 50,000 conversations a month with 1,500-token prompts and 800-token replies, Jamba Large 1.7 costs roughly $470 per month. Input is $2.00/1M tokens and output is $8.00/1M tokens.
Jamba Large 1.7 has a 256,000-token context window (large memory, enough for an entire book or large codebase). At the common estimate of roughly 0.75 words per token, that means you can fit about 190,000 words of input and history in a single call.
Beyond text generation, Jamba Large 1.7 supports calling functions / tools, strict JSON output, and fine-tuning on your own data. It streams replies by default.
Jamba Large 1.7 was released in March 2025, with training data cut off around October 2024.
Models in a similar class include Gemini 3.1 Pro Preview, Gemini 3.1 Pro Preview Custom Tools, and GPT-4.1. The "Similar models" section below this FAQ links to each.
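One pricing lever the FAQ does not spell out is caching: cached input is billed at $0.20/1M versus $2.00/1M for fresh input, a 90% discount on any tokens served from cache. The sketch below quantifies the effect; the 80% hit rate is an illustrative assumption, not a measured figure.

```python
# Sketch: input-cost savings from prompt caching at Jamba Large 1.7 list prices.
INPUT_PER_M = 2.00   # $ per 1M fresh input tokens
CACHED_PER_M = 0.20  # $ per 1M cached input tokens

def input_cost(total_tokens: float, cached_fraction: float) -> float:
    """Dollar cost of input when a fraction of tokens hits the prompt cache."""
    cached = total_tokens * cached_fraction
    fresh = total_tokens - cached
    return fresh / 1e6 * INPUT_PER_M + cached / 1e6 * CACHED_PER_M

# 75M input tokens/month with an assumed 80% cache hit rate on a shared prefix:
print(round(input_cost(75e6, 0.8), 2))  # → 42.0, vs 150.0 with no caching
```

Workloads with a long shared system prompt or repeated document context benefit most, since the shared prefix is exactly what caching discounts.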

Still unsure?

Compare Jamba Large 1.7 against 100+ other models.

Open the full wizard — pick a use case, set your usage, and see side-by-side monthly costs in under a minute.