Methodology · matrix version 2026-04-01

How we score multi-agent frameworks.

Three commitments shape every score on this page: no vendor pays for placement, no score is guessed, and there are no trial programs. Scores come from a named senior engineer at Buzzi applying the public rubrics below; each framework page also carries a last_verified_at timestamp for freshness auditing.

10 frameworks, neutral cards

  • LangGraph

    ×1.0 overhead

    LangChain · MIT · primary language: Python

    repo·docs

  • CrewAI

    ×1.3 overhead

    CrewAI · MIT · primary language: Python

    repo·docs

  • AutoGen / AG2

    ×2.5 overhead

    Microsoft / AG2 community · CC-BY-4.0 / Apache-2.0 · primary language: Python

    repo·docs

  • OpenAI Agents SDK

    ×1.1 overhead

    OpenAI · MIT · primary language: Python

    repo·docs

  • Pydantic AI

    ×1.0 overhead

    Pydantic · MIT · primary language: Python

    repo·docs

  • Anthropic Claude Agent SDK

    ×1.1 overhead

    Anthropic · MIT · primary language: Python

    repo·docs

  • Google Agent Development Kit

    ×1.2 overhead

    Google · Apache-2.0 · primary language: Python

    repo·docs

  • Microsoft Semantic Kernel

    ×1.2 overhead

    Microsoft · MIT · primary language: multiple

    repo·docs

  • LlamaIndex Agents

    ×1.4 overhead

    LlamaIndex · MIT · primary language: Python

    repo·docs

  • Haystack

    ×1.3 overhead

    deepset · Apache-2.0 · primary language: Python

    repo·docs

15 capability axes, with rubrics

  1. Sequential workflows

    Pipeline-style chains where one agent finishes before the next starts.

    10 / 10
    Pipelines are a first-class primitive with explicit ordering and typed handoff.
    5 / 10
    Sequential chains are possible via orchestration code but not a native primitive.
    0 / 10
    Framework cannot guarantee deterministic sequential ordering.
  2. Parallel workflows

    Concurrent fan-out / fan-in across multiple agents.

    10 / 10
    Native parallel execution with built-in result merging and back-pressure.
    5 / 10
    Parallel execution requires custom asyncio / threading code on top.
    0 / 10
    No support for concurrent agent execution.
  3. Hierarchical workflows

    Supervisor-and-worker patterns with delegation and aggregation.

    10 / 10
    Supervisor pattern is documented, idiomatic, and replayable.
    5 / 10
    Achievable but requires hand-rolled message routing.
    0 / 10
    No first-class supervisor primitive.
  4. Adaptive workflows

    Dynamic routing where agents pick the next step based on intermediate state.

    10 / 10
    Router/handoff primitives are first-class with conditional edges.
    5 / 10
    Possible via tool calls but not the framework's sweet spot.
    0 / 10
    Control flow is rigid; no dynamic routing.
  5. State management

    Persistent, typed memory across runs and across agents.

    10 / 10
    Typed state schema, persistent checkpoints, replay support.
    5 / 10
    Session memory is supported; persistence requires external store.
    0 / 10
    Stateless by default; users must build persistence themselves.
  6. Human-in-the-loop

    Pause-resume primitives so humans can approve, edit, or reject actions.

    10 / 10
    Native interrupt/resume with serialisable checkpoints.
    5 / 10
    Approval gates can be bolted on; not a first-class primitive.
    0 / 10
    No interrupt mechanism — the framework runs to completion.
  7. Python support

    Production-grade Python SDK with active maintenance.

    10 / 10
    Reference implementation; active releases; complete typing.
    5 / 10
    Functional Python SDK lagging the primary language.
    0 / 10
    No Python SDK.
  8. TypeScript support

    Production-grade TypeScript / Node SDK at parity with Python.

    10 / 10
    First-class TS SDK with parity to Python in features and types.
    5 / 10
    TS SDK exists but trails Python in feature coverage.
    0 / 10
    No TS SDK.
  9. .NET / Java support

    First-class JVM (Java/Kotlin) and/or .NET SDK.

    10 / 10
    Reference-quality .NET and/or Java SDK with feature parity.
    5 / 10
    Community port or partial SDK.
    0 / 10
    No .NET or Java SDK.
  10. MCP support

    Native Model Context Protocol client and/or server primitives.

    10 / 10
    Authored or reference implementation of MCP.
    5 / 10
    MCP available as an adapter or community plugin.
    0 / 10
    No MCP support.
  11. A2A support

    Native Agent-to-Agent (Google) protocol primitives.

    10 / 10
    Authored or reference implementation of A2A.
    5 / 10
    A2A available via adapter; partial coverage.
    0 / 10
    No A2A support.
  12. Observability

    Tracing, token accounting, replay, and audit-grade logs.

    10 / 10
    Built-in tracing dashboard, structured token accounting, replay, exportable audit log.
    5 / 10
    OpenTelemetry hooks exist; user must wire dashboards themselves.
    0 / 10
    Print-statement debugging only.
  13. Deployment flexibility

    Range of supported deployment targets (cloud, on-prem, edge).

    10 / 10
    Cloud, on-prem, and edge all documented and tested.
    5 / 10
    Cloud-first; on-prem requires extra work.
    0 / 10
    Tied to a single hosted backend.
  14. Maturity

    Production track record, release cadence, community size.

    10 / 10
    2+ years of production use across many large deployments.
    5 / 10
    6-18 months in the wild; growing but evolving rapidly.
    0 / 10
    Pre-1.0; APIs change every release.
  15. Learning curve (higher = easier)

    Time-to-prototype for a developer new to the framework.

    10 / 10
    A working prototype in under 30 minutes from a clean machine.
    5 / 10
    Prototype in half a day with the docs open.
    0 / 10
    Multi-week onboarding before the first useful run.
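Axes 1-4 describe orchestration shapes rather than any one framework's API. As a framework-agnostic illustration of the fan-out / fan-in pattern scored under axis 2, here is a minimal sketch in plain asyncio; the agent function is a hypothetical stand-in for an LLM-backed call, not code from any scored framework:

```python
import asyncio

async def run_agent(role: str, task: str) -> str:
    # Stand-in for a real agent invocation (an LLM request in practice).
    await asyncio.sleep(0.01)
    return f"{role}: analysed {task!r}"

async def fan_out_fan_in(task: str, roles: list[str]) -> str:
    # Fan-out: start one coroutine per agent role and run them concurrently.
    results = await asyncio.gather(*(run_agent(r, task) for r in roles))
    # Fan-in: merge the partial results into a single combined answer.
    return "\n".join(results)

merged = asyncio.run(fan_out_fan_in("Q3 report", ["researcher", "critic", "writer"]))
print(merged)
```

A framework that scores 10/10 on this axis provides the gather-and-merge step (plus back-pressure) as a built-in primitive; a 5/10 framework leaves this asyncio wiring to the user.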

Scoring formula

# Ranking
weights = build_weight_vector(inputs)      # 15 weights derived from the user's inputs
scored = []
for fw in frameworks:
    score = sum(fw.capabilities[cap] * weights[cap] for cap in CAPS)
    if hard_constraint_fails(inputs, fw):
        score = 0
    scored.append((fw, score))
return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Cost per task
tokens = (base_task_tokens
          * framework_overhead_multiplier
          * (1 + (roles - 1) * 0.3)
          * (1.2 if hitl else 1.0))
per_task_usd = (0.7 * tokens / 1_000_000 * input_rate
                + 0.3 * tokens / 1_000_000 * output_rate)
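Plugging illustrative numbers into the cost formula makes the multipliers concrete. The base token count and per-million rates below are hypothetical examples, not values from the llm_models table:

```python
def per_task_usd(base_task_tokens: float, overhead: float, roles: int,
                 hitl: bool, input_rate: float, output_rate: float) -> float:
    # Estimated tokens consumed by one task, per the formula above.
    tokens = (base_task_tokens * overhead
              * (1 + (roles - 1) * 0.3)
              * (1.2 if hitl else 1.0))
    # 70% of tokens are priced at the input rate, 30% at the output rate.
    return (0.7 * tokens / 1_000_000 * input_rate
            + 0.3 * tokens / 1_000_000 * output_rate)

# Example: 20k base tokens, x1.3 overhead, 3 roles, HITL enabled,
# $3 / $15 per million input/output tokens.
cost = per_task_usd(20_000, 1.3, roles=3, hitl=True, input_rate=3.0, output_rate=15.0)
print(f"${cost:.2f}")  # ≈ $0.33
```

Note how quickly the multipliers compound: three roles and HITL nearly double the token budget (1.6 × 1.2 = 1.92) before the framework overhead is applied.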

Glossary

Hierarchical
A supervisor agent delegates work to sub-agents, reviews their output, and composes the final answer. Good for multi-stage tasks with clear ownership.
Adaptive
Agents decide dynamically which other agents or tools to invoke based on intermediate results. Best when the control flow cannot be fixed upfront.
Agent
A named role with its own prompt, tools, and memory. "Roles" counts unique agent identities, not the number of LLM calls.
HITL (Human-in-the-Loop)
The workflow pauses for a human to approve, edit, or reject an agent action before continuing. Critical for regulated or high-risk automations.
MCP (Model Context Protocol)
Anthropic-led open standard for connecting LLM agents to tools, data, and other servers. Look for MCP support if you want vendor-portable tool integrations.
A2A (Agent-to-Agent Protocol)
Google-led open standard for agents from different vendors to discover and call each other. Emerging spec; relevant for federated agent systems.
Observability
Structured traces, token accounting, replayable runs, and exportable audit logs. "Regulated-grade" means immutable audit trails and retention controls.
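The hierarchical pattern defined in the glossary can be sketched in plain Python, independent of any framework. The worker functions here are hypothetical stand-ins for LLM-backed sub-agents:

```python
def research(topic: str) -> str:
    # Sub-agent 1: gathers raw material for the task.
    return f"notes on {topic}"

def draft(notes: str) -> str:
    # Sub-agent 2: turns the research output into a draft.
    return f"draft built from [{notes}]"

def supervisor(topic: str) -> str:
    # Delegate: the supervisor owns the task and calls sub-agents in turn.
    notes = research(topic)
    text = draft(notes)
    # Review: reject a sub-agent's output if it fails a basic check.
    if "draft" not in text:
        raise RuntimeError("draft agent produced unusable output")
    # Compose: the supervisor assembles the final answer itself.
    return f"final answer: {text}"

print(supervisor("agent frameworks"))
```

Frameworks scoring 10/10 on axis 3 ship this delegate-review-compose loop as a documented, replayable primitive rather than hand-rolled calls like these.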

Public dataset

The full capability matrix is published as JSON for AI engines and researchers:

FAQ

  1. How are scores assigned?

    A named senior engineer at Buzzi rates each framework on each axis using the public rubrics on this page. Scores are reviewed quarterly; we publish a last_reviewed timestamp for each framework in the public dataset.

  2. Do vendors pay for placement?

    No. Scores are editorial and are never for sale. Requests to change a score must be submitted as public PRs on the open matrix repository, with a technical justification.

  3. How do you decide which frameworks to track?

    Active GitHub repositories with more than 10k stars, or frameworks backed by Anthropic, Google, Microsoft, OpenAI, or LangChain. We add or retire frameworks once a quarter based on momentum and production usage.

  4. How is the cost per task calculated?

    estimated_tokens_per_task = base_task_tokens × framework_overhead_multiplier × (1 + (roles − 1) × 0.3) × (1.2 if HITL else 1.0). Token prices come from our llm_models table; users can override the model in the wizard.

  5. How do hard constraints work?

    A .NET stack narrows the choice to Microsoft Semantic Kernel. Java narrows it to Semantic Kernel or Google ADK. TypeScript with compliance-grade observability narrows it to LangGraph.js, the OpenAI Agents SDK, or the Anthropic Claude SDK. Excluded frameworks are shown together with the reason.

  6. Where can I submit a correction?

    Open a pull request on the buzzi-ai/agent-framework-matrix repository or email research@buzzi.ai. We review correction requests within 10 business days.
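The hard-constraint narrowing described in answer 5 amounts to a filter over the framework list. A minimal sketch, with framework names as plain strings and the rules paraphrased from that answer (the rule keys and function names here are illustrative, not from the actual wizard):

```python
RULES = {
    # stack requirement -> frameworks that survive the constraint
    "dotnet": {"Microsoft Semantic Kernel"},
    "java": {"Microsoft Semantic Kernel", "Google Agent Development Kit"},
    "typescript_compliance": {"LangGraph.js", "OpenAI Agents SDK",
                              "Anthropic Claude SDK"},
}

def apply_hard_constraint(frameworks: list[str], requirement: str):
    allowed = RULES[requirement]
    kept = [fw for fw in frameworks if fw in allowed]
    # Excluded frameworks are reported together with the reason, per the FAQ.
    excluded = [f"{fw} (fails {requirement})" for fw in frameworks
                if fw not in allowed]
    return kept, excluded

kept, excluded = apply_hard_constraint(
    ["LangGraph", "CrewAI", "Microsoft Semantic Kernel"], "java")
print(kept)  # ['Microsoft Semantic Kernel']
```

In the ranking formula this is equivalent to setting score = 0 for every framework in the excluded list, which is why constrained runs still display the excluded cards with a reason instead of silently dropping them.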

Found a score you disagree with?

Open a PR on the open matrix repository or email research@buzzi.ai. All correction requests receive a public response within 10 business days.

Back to the selector