Free · ~12 minutes · No login

Scope and risk-assess your MCP integration before you go live.

Which systems to connect. Which MCP server per system. Which auth pattern. Where the prompt-injection and exfiltration hotspots sit. Build effort in engineer-weeks. A scoping artifact that senior engineers can hand off to security and the scrum team.

How it works

Describe → Map → Ship.
Twelve minutes, no demo magic.

  1. Describe

    Tell us about the integration.

    Pick the systems you want to connect, configure access per system, and answer eight more questions on autonomy, scale, auth, deployment, and regulatory context.

  2. Map

    A rule engine, not gut feeling.

    Hard rules pick the architecture pattern. Auth per system follows the matrix. Effort is multiplied by autonomy and regulatory load. The OWASP LLM Top 10 maps onto the pipeline.

  3. Ship

    A scoping artifact your team can actually use.

    Per-system server choices with auth scheme and scopes, an effort range in engineer-weeks, and the five risk hotspots with concrete mitigations.

Patterns

Four architecture patterns. The wizard picks one.

  • Local stdio

    Single-user dev tools and strictly on-prem, high-sensitivity setups. No remote attack surface, but no shared use either.

  • Remote (SSE/HTTP)

    The cloud default pattern. Servers run as services, transports are observable, and auth integrates with your IdP.

  • Hybrid

    Mix sensitivity levels: high-sensitivity systems behind a gateway, low-risk systems direct. Pragmatic for evolving rollouts.

  • MCP gateway

    Central auth, rate limiting, prompt-injection filtering, audit logging. Exactly for when scale + writes + regulation all show up at once.

Methodology

OWASP-mapped. Effort formula in plain text.

The risk pipeline is mapped onto the OWASP LLM Top 10 (2026), with per-node severity based on your inputs. The effort formula is published in pseudocode: autonomy multiplier × regulatory multiplier × (base scaffolding + days per server + gateway days).

Read the full methodology
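The published formula can be sketched as a small Python function. The day counts and multiplier values below are illustrative assumptions, not the wizard's actual calibration, which it derives from your inputs:

```python
def effort_weeks(num_servers, autonomy_mult, regulatory_mult,
                 base_scaffolding_days=5, days_per_server=4,
                 gateway_days=10, needs_gateway=False):
    """Engineer-week estimate following the pseudocode formula above.

    All default day counts and any multiplier values passed in are
    illustrative assumptions for this sketch.
    """
    base_days = base_scaffolding_days + days_per_server * num_servers
    if needs_gateway:
        base_days += gateway_days
    total_days = autonomy_mult * regulatory_mult * base_days
    return total_days / 5  # convert working days to engineer-weeks

# Example: 4 servers plus a gateway at moderate autonomy/regulation
print(round(effort_weeks(4, autonomy_mult=1.5, regulatory_mult=1.3,
                         needs_gateway=True), 1))
```

Because the two multipliers compound on top of the base days, small changes in autonomy or regulatory load move the estimate disproportionately, which is why the tool reports a range rather than a point value.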

FAQ

Frequently asked questions about MCP risk and scoping.

What is MCP?

MCP (Model Context Protocol) is an open standard introduced by Anthropic for connecting LLMs and agents to external tools, data, and systems through a uniform server interface. It is increasingly the default integration layer for production agents.

Why use this tool?

Because scoping and risk-assessing a real MCP rollout is the bottleneck. You decide which systems to connect, which servers per system, the auth scheme, the gateway question, and the OWASP-mapped risk hotspots. Twelve minutes versus a multi-week scoping engagement.

When do I use remote MCP vs local MCP?

Local stdio for single-user dev tools and strict on-prem with high sensitivity. Remote SSE/HTTP for cloud deployments at any meaningful scale. Hybrid when you mix low- and high-sensitivity workloads. Gateway when you have writes + scale + regulatory.

When do I need an MCP gateway?

When you have high autonomy + write-capable servers, or scale (10K+ MAU) + multi-tenant + a regulatory constraint. The gateway centralises OAuth, rate-limiting, prompt-injection filtering, and audit logging.
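The decision rule above is mechanical enough to write down directly. A minimal sketch, with flag names and the 10K MAU threshold taken from the rule as stated:

```python
def needs_gateway(high_autonomy: bool, has_writes: bool,
                  mau: int, multi_tenant: bool, regulated: bool) -> bool:
    """Gateway decision rule: risky writes, or regulated multi-tenant scale."""
    risky_writes = high_autonomy and has_writes
    regulated_scale = mau >= 10_000 and multi_tenant and regulated
    return risky_writes or regulated_scale
```

For example, a single-tenant pilot with high autonomy and write access already triggers the gateway, while a large but read-only, unregulated deployment does not.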

How do I choose an auth scheme?

See the matrix on the methodology page. B2C → OAuth 2.1 + short tokens. B2B with high sensitivity → OAuth 2.1 user delegated + per-session rotation. On-prem regulated → mTLS + short-lived JWT issued by internal CA.
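The three rows quoted above can be held as a small lookup table. The keys and the fallback-to-B2C default below are assumptions of this sketch; the full matrix lives on the methodology page:

```python
# Illustrative subset of the auth matrix; keys and scheme strings
# mirror the three rules quoted above.
AUTH_MATRIX = {
    ("b2c", "any"): "OAuth 2.1 + short-lived tokens",
    ("b2b", "high"): "OAuth 2.1 user-delegated + per-session rotation",
    ("on-prem", "regulated"): "mTLS + short-lived JWT from internal CA",
}

def pick_auth(context: str, sensitivity: str) -> str:
    # Assumed fallback: default to the B2C row when no specific match.
    return AUTH_MATRIX.get((context, sensitivity),
                           AUTH_MATRIX[("b2c", "any")])
```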

What does least privilege mean for MCP?

Read-only by default, write only when explicitly required, admin scopes never. Per-user audit log. Allowlist tools per server. Rate limit at the gateway when one is recommended, otherwise at the server.

Is MCP safe for HIPAA / PCI / SOX / GDPR?

Yes, when scoped correctly: short-lived tokens, mTLS for on-prem, full audit log, per-tool allowlist, and human-in-the-loop on writes. Each regulation adds specific requirements, which you can find on the methodology page.

What's the OWASP LLM Top 10 mapping?

  • User→LLM: LLM01 (prompt injection)
  • LLM→MCP: LLM06/LLM08 (over-permissive scope, excessive agency)
  • MCP→Downstream: LLM08/LLM09 (exfiltration)
  • Downstream→LLM: LLM01/LLM03 (indirect injection)
  • LLM→User: LLM02 (sensitive disclosure)
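As data, the mapping above is just a dictionary from pipeline edge to OWASP codes. The edge identifiers are illustrative names; the LLM codes come straight from the mapping stated above:

```python
# Pipeline-edge → OWASP LLM Top 10 mapping, restated as data.
OWASP_MAP = {
    "user->llm": ["LLM01"],                  # prompt injection
    "llm->mcp": ["LLM06", "LLM08"],          # over-permissive scope, excessive agency
    "mcp->downstream": ["LLM08", "LLM09"],   # exfiltration
    "downstream->llm": ["LLM01", "LLM03"],   # indirect injection
    "llm->user": ["LLM02"],                  # sensitive disclosure
}
```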

Why is your effort range so wide?

Because multipliers compound. A four-server integration at the wrong autonomy + regulatory level easily doubles. We always show a range with the breakdown so you can see what is driving the spread.

Should I build a custom MCP server or use a public one?

Use the official one when it exists. High-quality community servers when not. Build custom only when you have a domain-specific protocol or compliance constraint that no public server meets.

Can I self-host MCP servers?

Yes. Almost every server in the registry can run on your own infrastructure. The tool flags which ones, what dependencies they need, and the typical sizing.

What about data residency?

For strict residency, host the MCP server in-region or on-prem. The tool surfaces residency implications in the per-system table when your regulatory constraints are set.

How often do you refresh the registry?

We scan modelcontextprotocol.io weekly, MCP.so and PulseMCP quarterly, and follow the spec repo's RSS feed in real time. Editorial review runs quarterly.

How do I prevent prompt injection via MCP?

Tool allowlist per server, per-session scope narrowing, output scanning for known injection patterns, and per-tool confirmation on writes when autonomy is below post-review.
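Two of those mitigations, the per-server tool allowlist with write confirmation and the output scan, can be sketched in a few lines. The allowlist contents, injection patterns, and function names here are illustrative assumptions, not a complete filter:

```python
import re

# Assumed example allowlist: which tools each server may expose.
ALLOWLIST = {"github": {"get_issue", "create_issue"}}

# A couple of well-known injection phrasings; real filters use far more.
INJECTION_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"ignore (all )?previous instructions",
    r"reveal .*system prompt",
)]

def guard_tool_call(server: str, tool: str, is_write: bool,
                    autonomy: str) -> bool:
    """Reject non-allowlisted tools; return True if a human must confirm."""
    if tool not in ALLOWLIST.get(server, set()):
        raise PermissionError(f"{tool} not allowlisted on {server}")
    # Per-tool confirmation on writes when autonomy is below post-review.
    return is_write and autonomy != "post-review"

def scan_output(text: str) -> bool:
    """Flag tool output that matches a known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

In practice the allowlist check belongs in the gateway when one is deployed, so a compromised downstream server cannot widen its own tool surface.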

How do I roll this out across a larger enterprise?

Gateway-first, then a phased server rollout (3 servers per quarter is realistic at first). Centralise audit + observability before scaling beyond five servers.

Buzzi services

Building agents? Buzzi.ai delivers MCP-integrated systems in 6 weeks.

We have delivered MCP gateways and 12-server integrations under HIPAA, SOX, and GDPR. Two weeks of scoping, four weeks of build, full audit trail as standard.