Best LLM for Function Calling / Tool Use

Ranked on tool-selection accuracy, multi-tool consistency, and price. Tool-use quality compounds in agent loops.

Updated April 2026. Top 3 this month: GPT-5, Gemini 2 Pro, Claude Opus 4.7.

How we rank

Function calling is the connective tissue of agent systems, and errors compound in loops: a model that picks the wrong tool once in 20 calls runs at 95% per-call accuracy, so a 10-step agent loop completes cleanly only about 60% of the time (0.95^10 ≈ 0.60). That is unacceptable for any non-trivial automation, which is why we weight tool-selection accuracy and multi-tool benchmarks heavily, then price.

Pillars and weights: tool selection (45%) · multi-tool (30%) · price (25%). Our full methodology is published on the methodology page.
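To make the weighting concrete, here is a minimal sketch of how a composite score could be computed from the three pillars. Only the 45/30/25 weights come from this page; the sub-scores, their normalization to the 0–1 range, and the price inversion are illustrative assumptions.

    # Hypothetical sketch of this page's weighted composite. Only the
    # 45/30/25 weights come from the ranking; the sub-score values and
    # their normalization to [0, 1] are invented for illustration.
    WEIGHTS = {"tool_selection": 0.45, "multi_tool": 0.30, "price": 0.25}

    def composite_score(scores: dict[str, float]) -> float:
        """Weighted sum of pillar scores, each assumed pre-normalized to [0, 1]."""
        return sum(WEIGHTS[pillar] * scores[pillar] for pillar in WEIGHTS)

    # Invented numbers; price is inverted so cheaper models score closer to 1.
    print(composite_score({"tool_selection": 0.92, "multi_tool": 0.88, "price": 0.60}))  # 0.828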

Top ranked models

Rank  Model             Provider   Input $/1M  Output $/1M  Context (tokens)
1     GPT-5             OpenAI     $1.25       $10.00       200,000
2     Gemini 2 Pro      Google     $3.50       $10.50       2,000,000
3     Claude Opus 4.7   Anthropic  $5.00       $25.00       200,000
4     GPT-5 nano        OpenAI     $0.05       $0.40        400,000
5     Gemini 2.0 Flash  Google     $0.10       $0.40        1,000,000
6     GPT-4.1 nano      OpenAI     $0.10       $0.40        1,000,000
7     GPT-4o mini       OpenAI     $0.15       $0.60        128,000
8     DeepSeek V3.2     DeepSeek   $0.27       $1.10        128,000
9     DeepSeek V3       DeepSeek   $0.27       $1.10        128,000
10    GPT-4.1 mini      OpenAI     $0.40       $1.60        1,000,000

Tips for function calling / tool use

  • Keep the tool list short and well-named. Long tool lists degrade accuracy.
  • Use JSON schemas with required fields to reduce malformed calls (first sketch after this list).
  • Log tool failures and retry with a fallback model tier if needed (second sketch after this list).
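
For the schema tip, here is a minimal sketch of a tool definition in the OpenAI-style tools format with a declared required field. The tool name, description, and parameters are invented for illustration.

    # Hypothetical tool definition in the OpenAI-style "tools" format.
    # The name, description, and parameters are invented for illustration.
    get_order_status = {
        "type": "function",
        "function": {
            "name": "get_order_status",
            "description": "Look up the fulfillment status of a customer order.",
            "parameters": {
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "Internal order ID."},
                    "include_history": {"type": "boolean", "description": "Also return status history."},
                },
                "required": ["order_id"],  # declaring this cuts malformed calls
            },
        },
    }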
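
For the logging-and-fallback tip, a sketch of a retry wrapper that escalates through model tiers after repeated failures. The run_tool_call function and the tier names are placeholders for your own stack, not a real SDK API.

    import logging

    logger = logging.getLogger("tool_calls")

    # Placeholder tiers; substitute whatever models your stack actually uses.
    MODEL_TIERS = ["small-model", "mid-model", "large-model"]

    def call_with_fallback(run_tool_call, payload, max_attempts_per_tier=2):
        """Try each model tier in order; log failures and escalate on repeats.

        run_tool_call(model, payload) is a stand-in for your own function that
        issues the request and raises on a malformed or failed tool call.
        """
        last_error = None
        for model in MODEL_TIERS:
            for attempt in range(1, max_attempts_per_tier + 1):
                try:
                    return run_tool_call(model, payload)
                except Exception as exc:  # narrow this to your SDK's error types
                    last_error = exc
                    logger.warning("tool call failed (model=%s, attempt=%d): %s",
                                   model, attempt, exc)
        raise RuntimeError("all model tiers exhausted") from last_error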

Frequently asked questions

Which LLM is best at tool use?

As of April 2026, our weighted top 3 are GPT-5, Gemini 2 Pro, and Claude Opus 4.7.

How many tools is too many?

Accuracy drops noticeably past ~30 tools in a single call. Route to a smaller toolset per conversation turn when you can.
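
One lightweight way to do that routing is to group tools by category and expose only the matching group each turn. The keyword router below is a toy sketch; the groups, tool names, and keywords are all invented, and a production system might route on embeddings instead.

    # Toy per-turn router: expose only one category's tools to the model,
    # instead of the full catalog. Groups and keywords are invented.
    TOOL_GROUPS = {
        "billing": ["get_invoice", "refund_order"],
        "shipping": ["get_order_status", "update_address"],
        "catalog": ["search_products", "get_product_details"],
    }

    KEYWORDS = {
        "billing": ("invoice", "refund", "charge"),
        "shipping": ("ship", "deliver", "track", "address"),
        "catalog": ("product", "price", "stock"),
    }

    def tools_for_turn(user_message: str) -> list[str]:
        """Return the tool subset for this turn; fall back to a small default set."""
        text = user_message.lower()
        for group, words in KEYWORDS.items():
            if any(word in text for word in words):
                return TOOL_GROUPS[group]
        return TOOL_GROUPS["catalog"]  # arbitrary default; tune for your workload

    print(tools_for_turn("Where is my package? Track order 123."))  # shipping tools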

Are tool-use benchmarks trustworthy?

Directionally, yes. Use rankings to build a shortlist, then run the top two on your actual tool catalog before committing.

Related tasks

Want to model your own workload? Use the volume and switch-cost calculators on the main tool page.

Data refreshed daily via our snapshot cron. See our public JSON API for programmatic access.