What is MCP?
MCP (Model Context Protocol) is an open standard introduced by Anthropic for connecting LLMs and agents to external tools, data, and systems through a uniform server interface. It is increasingly the default integration layer for production agents.
Why use this tool?
Because scoping and risk-assessing a real MCP rollout is the bottleneck. You decide which systems to connect, which servers per system, the auth scheme, whether you need a gateway, and the OWASP-mapped risk hotspots. Twelve minutes instead of a multi-week scoping engagement.
When do I use remote MCP vs local MCP?
Local stdio for single-user dev tools and strict on-prem with high sensitivity. Remote SSE/HTTP for cloud deployments at any meaningful scale. Hybrid when you mix low- and high-sensitivity workloads. Gateway when you have writes + scale + regulatory.
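As a rough sketch, the transport rules above can be encoded as a single function. All flag names here are hypothetical shorthand for the criteria in the answer, not inputs the tool actually takes:

```python
def recommend_transport(single_user_dev: bool, on_prem_high_sensitivity: bool,
                        mixed_sensitivity: bool, writes_scale_regulatory: bool) -> str:
    """Illustrative encoding of the transport rules above; not a full decision model."""
    if writes_scale_regulatory:            # writes + scale + regulatory
        return "gateway"
    if mixed_sensitivity:                  # low- and high-sensitivity workloads mixed
        return "hybrid"
    if single_user_dev or on_prem_high_sensitivity:
        return "local stdio"
    return "remote SSE/HTTP"               # cloud deployment at any meaningful scale
```

The ordering matters: the gateway condition dominates, since it subsumes the others.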
When do I need an MCP gateway?
When you have high autonomy plus write-capable servers, or scale (10K+ MAU) plus multi-tenancy plus a regulatory constraint. The gateway centralises OAuth, rate limiting, prompt-injection filtering, and audit logging.
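The trigger condition above reduces to one boolean expression. This is a sketch of the rule as stated, with hypothetical parameter names:

```python
def needs_gateway(high_autonomy: bool, write_capable: bool,
                  mau: int, multi_tenant: bool, regulated: bool) -> bool:
    """Gateway trigger: autonomy + writes, OR scale + multi-tenancy + regulation."""
    return (high_autonomy and write_capable) or \
           (mau >= 10_000 and multi_tenant and regulated)
```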
How do I choose an auth scheme?
See the matrix on the methodology page. B2C → OAuth 2.1 + short tokens. B2B with high sensitivity → OAuth 2.1 user delegated + per-session rotation. On-prem regulated → mTLS + short-lived JWT issued by internal CA.
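Condensed into a lookup table (the keys are our shorthand for the three rows above; the full matrix on the methodology page has more):

```python
AUTH_MATRIX = {
    "b2c": "OAuth 2.1 + short-lived tokens",
    "b2b_high_sensitivity": "OAuth 2.1 user-delegated + per-session rotation",
    "on_prem_regulated": "mTLS + short-lived JWT from internal CA",
}

def auth_scheme(deployment: str) -> str:
    """Look up the recommended auth scheme for a deployment context."""
    return AUTH_MATRIX[deployment]
```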
What does least privilege mean for MCP?
Read-only by default, write only when explicitly required, admin scopes never. Per-user audit log. Allowlist tools per server. Rate limit at the gateway when one is recommended, otherwise at the server.
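A minimal default-deny sketch of the allowlist-plus-write-gating policy above. Server and tool names are hypothetical examples:

```python
# Per-server tool allowlist; anything not listed is denied.
ALLOWED_TOOLS = {
    "github": {"read_file", "list_issues"},
    "jira": {"read_ticket", "create_ticket"},
}

def authorize(server: str, tool: str, is_write: bool, write_approved: bool) -> bool:
    """Default-deny: the tool must be allowlisted, and writes need explicit approval."""
    if tool not in ALLOWED_TOOLS.get(server, set()):
        return False
    if is_write and not write_approved:
        return False
    return True
```

Admin-scoped tools simply never appear in the allowlist.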
Is MCP safe for HIPAA / PCI / SOX / GDPR?
Yes when scoped correctly: short-lived tokens, mTLS for on-prem, full audit log, per-tool allowlist, and human-in-the-loop on writes. Each regulation adds specific requirements you can find on the methodology page.
What's the OWASP LLM Top 10 mapping?
User→LLM = LLM01 prompt injection. LLM→MCP = LLM06/LLM08 over-permissive scope and excessive agency. MCP→Downstream = LLM08/LLM09 exfiltration. Downstream→LLM = LLM01/LLM03 indirect injection. LLM→User = LLM02 sensitive disclosure.
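The same mapping as a lookup table you could wire into a review checklist (boundary keys are our shorthand):

```python
# Trust boundary -> OWASP LLM Top 10 risk IDs, as listed above.
OWASP_LLM_MAP = {
    "user->llm": ["LLM01"],          # prompt injection
    "llm->mcp": ["LLM06", "LLM08"],  # over-permissive scope, excessive agency
    "mcp->downstream": ["LLM08", "LLM09"],  # exfiltration
    "downstream->llm": ["LLM01", "LLM03"],  # indirect injection
    "llm->user": ["LLM02"],          # sensitive disclosure
}
```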
Why is your effort range so wide?
Because multipliers compound. A four-server integration at the wrong autonomy + regulatory level easily doubles. We always show a range with the breakdown so you can see what is driving the spread.
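A worked example of the compounding, with purely illustrative numbers (the tool's actual multipliers differ):

```python
base_days = 10.0       # hypothetical base effort for a four-server integration
autonomy_mult = 1.5    # illustrative multiplier for high autonomy
regulatory_mult = 1.4  # illustrative multiplier for a regulated industry

# Multipliers compound: 10 * 1.5 * 1.4 = 21 days, more than double the base.
high_end = base_days * autonomy_mult * regulatory_mult
```

This is why the estimate is shown as a range with its breakdown: two seemingly modest multipliers together produce a 2.1x spread.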
Should I build a custom MCP server or use a public one?
Use the official one when it exists. High-quality community servers when not. Build custom only when you have a domain-specific protocol or compliance constraint that no public server meets.
Can I self-host MCP servers?
Yes. Almost every server in the registry can run on your own infrastructure. The tool flags which ones, what dependencies they need, and the typical sizing.
What about data residency?
For strict residency, host the MCP server in-region or on-prem. The tool surfaces residency implications in the per-system table when your regulatory constraints are set.
How often do you refresh the registry?
We scan modelcontextprotocol.io weekly, MCP.so and PulseMCP quarterly, and follow the spec repo RSS feed in real time. Editorial review is quarterly.
How do I prevent prompt injection via MCP?
Tool allowlist per server, per-session scope narrowing, output scanning for known injection patterns, and per-tool confirmation on writes when autonomy is below post-review.
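A minimal sketch of the output-scanning layer. The patterns are illustrative only; a production filter needs a maintained, tested pattern set and should sit alongside the allowlist and confirmation controls, not replace them:

```python
import re

# Illustrative known-injection patterns; not a complete or current list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now (a|an|in) ", re.I),
    re.compile(r"system prompt", re.I),
]

def flag_tool_output(text: str) -> bool:
    """Return True when a downstream tool's output matches a known injection pattern."""
    return any(p.search(text) for p in INJECTION_PATTERNS)
```

Flagged outputs would then be blocked or routed to human review before reaching the model.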
How do I roll this out across a larger enterprise?
Gateway-first, then a phased server rollout (3 servers per quarter is realistic at first). Centralise audit + observability before scaling beyond five servers.