Free · ~12 minutes · No login

Scope and risk-assess your MCP integration before you ship.

Which systems to connect. Which MCP server per system. Which auth pattern. Where the prompt-injection and exfiltration hotspots are. Build effort in engineer-weeks. A scoping artifact senior engineers can hand to security and the scrum team.

How it works

Describe → Map → Ship.
Twelve minutes, no demo magic.

  1. Describe

    Tell us about the integration.

    Pick the systems you plan to connect, configure access per system, and answer eight more questions about autonomy, scale, auth, deployment, and regulatory context.

  2. Map

    A rule engine, not vibes.

    Hard rules pick the architecture pattern. Per-system auth follows the matrix. Effort multiplies by autonomy and regulatory load. OWASP LLM Top 10 maps to the pipeline.

  3. Ship

    A scoping artifact your team can use.

    Per-system server picks with auth scheme and scopes, an effort range in engineer-weeks, and the five risk hotspots with concrete mitigations.

Patterns

Four architecture patterns. The wizard picks one.

  • Local stdio

    Single-user dev tools and strict on-prem with high sensitivity. No remote attack surface — but no shared usage either.

  • Remote (SSE/HTTP)

    The cloud-default pattern. Servers run as services, transports are observable, and auth integrates with your IdP.

  • Hybrid

    Mix sensitivity tiers: high-sensitivity behind a gateway, low-risk direct. Pragmatic for evolving rollouts.

  • MCP gateway

Centralised auth, rate limiting, prompt-injection filtering, and audit logging. Right when scale + writes + regulatory all show up.

Methodology

OWASP-mapped. Effort formula in plain text.

The risk pipeline is mapped to the OWASP LLM Top 10 (2025), with severity per node based on your inputs. The effort formula is published in pseudocode: autonomy multiplier × regulatory multiplier × (base scaffolding + per-server days + gateway days).
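The formula above can be sketched in a few lines of Python. The multiplier values and day counts below are illustrative placeholders, not the tool's calibrated numbers:

```python
def effort_weeks(num_servers: int, needs_gateway: bool,
                 autonomy_mult: float, regulatory_mult: float) -> float:
    """Effort in engineer-weeks per the published pseudocode formula.
    Constants are assumptions for illustration only."""
    BASE_SCAFFOLDING_DAYS = 5                  # assumed baseline setup
    PER_SERVER_DAYS = 3                        # assumed cost per MCP server
    GATEWAY_DAYS = 10 if needs_gateway else 0  # assumed gateway build cost
    days = autonomy_mult * regulatory_mult * (
        BASE_SCAFFOLDING_DAYS + num_servers * PER_SERVER_DAYS + GATEWAY_DAYS
    )
    return days / 5  # five working days per engineer-week

# e.g. four servers, gateway required, moderate autonomy and regulatory load
print(round(effort_weeks(4, True, 1.5, 1.3), 1))  # → 10.5
```

Because the two multipliers are applied to the whole base, small changes in autonomy or regulatory level move the final range more than adding a server does, which is why the breakdown matters.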

Read full methodology

FAQ

Common questions about MCP risk and scoping.

What is MCP?

MCP (Model Context Protocol) is an open standard introduced by Anthropic for connecting LLMs and agents to external tools, data, and systems through a uniform server interface. It is increasingly the default integration layer for production agents.

Why use this tool?

Because scoping and risk-assessing a real MCP rollout is the bottleneck. You decide which systems to connect, which servers per system, the auth scheme, the gateway question, and the OWASP-mapped risk hotspots. Twelve minutes versus a multi-week scoping engagement.

When do I use remote MCP vs local MCP?

Local stdio for single-user dev tools and strict on-prem with high sensitivity. Remote SSE/HTTP for cloud deployments at any meaningful scale. Hybrid when you mix low- and high-sensitivity workloads. Gateway when you have writes + scale + regulatory.

When do I need an MCP gateway?

When you have high autonomy + write-capable servers, or scale (10K+ MAU) + multi-tenant + a regulatory constraint. The gateway centralises OAuth, rate limiting, prompt-injection filtering, and audit logging.
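The decision rule above is simple enough to write down directly. Parameter names are assumptions; the thresholds come from the text:

```python
def needs_gateway(high_autonomy: bool, write_capable: bool,
                  mau: int, multi_tenant: bool, regulated: bool) -> bool:
    # Rule 1: high autonomy combined with write-capable servers
    if high_autonomy and write_capable:
        return True
    # Rule 2: scale (10K+ MAU) + multi-tenant + a regulatory constraint
    return mau >= 10_000 and multi_tenant and regulated
```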

How do I choose an auth scheme?

See the matrix on the methodology page. B2C → OAuth 2.1 + short-lived tokens. B2B with high sensitivity → OAuth 2.1 user-delegated + per-session rotation. On-prem regulated → mTLS + short-lived JWT issued by an internal CA.
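The three rows quoted above can be sketched as a lookup. This covers only the quoted rows, not the full matrix, and the deployment/sensitivity labels are assumptions:

```python
def auth_scheme(deployment: str, sensitivity: str) -> str:
    """Pick an auth scheme from the three matrix rows quoted in the FAQ."""
    if deployment == "b2c":
        return "OAuth 2.1 + short-lived tokens"
    if deployment == "b2b" and sensitivity == "high":
        return "OAuth 2.1 user-delegated + per-session rotation"
    if deployment == "on-prem" and sensitivity == "regulated":
        return "mTLS + short-lived JWT issued by internal CA"
    return "see full matrix on the methodology page"
```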

What does least privilege mean for MCP?

Read-only by default, write only when explicitly required, admin scopes never. Per-user audit log. Allowlist tools per server. Rate limit at the gateway when one is recommended, otherwise at the server.
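Expressed as per-server policy data, the defaults above might look like this. Server and tool names are purely illustrative:

```python
# Least-privilege policy for one hypothetical server entry.
SERVER_POLICY = {
    "github": {
        "scopes": ["read"],                            # read-only by default
        "allowed_tools": ["search_code", "get_file"],  # explicit tool allowlist
        "rate_limit_per_min": 60,                      # enforced at gateway or server
        "audit": "per-user",                           # per-user audit log
    },
}
```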

Is MCP safe for HIPAA / PCI / SOX / GDPR?

Yes when scoped correctly: short-lived tokens, mTLS for on-prem, full audit log, per-tool allowlist, and human-in-the-loop on writes. Each regulation adds specific requirements you can find on the methodology page.

What's the OWASP LLM Top 10 mapping?

User→LLM = LLM01 prompt injection. LLM→MCP = LLM06/LLM08 over-permissive scope and excessive agency. MCP→Downstream = LLM08/LLM09 exfiltration. Downstream→LLM = LLM01/LLM03 indirect injection. LLM→User = LLM02 sensitive disclosure.
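The five pipeline edges listed above, written out as data:

```python
# Pipeline edge → OWASP LLM Top 10 categories, per the mapping in the FAQ.
RISK_PIPELINE = {
    "user->llm":       ["LLM01"],           # prompt injection
    "llm->mcp":        ["LLM06", "LLM08"],  # over-permissive scope, excessive agency
    "mcp->downstream": ["LLM08", "LLM09"],  # exfiltration
    "downstream->llm": ["LLM01", "LLM03"],  # indirect injection
    "llm->user":       ["LLM02"],           # sensitive disclosure
}
```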

Why is your effort range so wide?

Because multipliers compound. A four-server integration at the wrong autonomy + regulatory level easily doubles. We always show a range with the breakdown so you can see what is driving the spread.

Should I build a custom MCP server or use a public one?

Use the official one when it exists. High-quality community servers when not. Build custom only when you have a domain-specific protocol or compliance constraint that no public server meets.

Can I self-host MCP servers?

Yes. Almost every server in the registry can run on your own infrastructure. The tool flags which ones, what dependencies they need, and the typical sizing.

What about data residency?

For strict residency, host the MCP server in-region or on-prem. The tool surfaces residency implications in the per-system table when your regulatory constraints are set.

How often do you refresh the registry?

We scan modelcontextprotocol.io weekly, MCP.so and PulseMCP quarterly, and follow the spec repo's RSS feed in real time. Editorial review runs quarterly.

How do I prevent prompt injection via MCP?

Tool allowlist per server, per-session scope narrowing, output scanning for known injection patterns, and per-tool confirmation on writes when autonomy is below post-review.
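The output-scanning step above can be sketched as a quarantine filter. The pattern list here is illustrative and far from exhaustive; a real deployment needs a maintained ruleset:

```python
import re

# Known-injection patterns to scan tool output for (illustrative only).
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"system prompt", re.I),
]

def flag_tool_output(text: str) -> bool:
    """Return True when a tool result should be quarantined for review."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

Pattern matching catches the obvious cases cheaply; it complements, rather than replaces, the allowlist and write-confirmation controls.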

How do I roll this out across a larger enterprise?

Gateway-first, then a phased server rollout (3 servers per quarter is realistic at first). Centralise audit + observability before scaling beyond five servers.

Buzzi services

Building agents? Buzzi.ai ships MCP-integrated systems in 6 weeks.

We've shipped MCP gateways and 12-server integrations under HIPAA, SOX, and GDPR. Two-week scoping, four-week build, full audit trail by default.