Free · ~12 min · No sign-in
Scope and risk-assess your MCP integration before you commit to delivery.
Which systems to connect. Which MCP server for each system. Which auth pattern. Where the prompt-injection and data-exfiltration hotspots are. Development effort in engineer-weeks. A scoping deliverable a senior engineer can hand to security and agile teams.
How it works
Describe
Pick the systems you plan to connect, configure access per system, and answer eight more questions about autonomy, scale, auth, deployment, and regulatory context.
Map
Hard rules pick the architecture pattern. Per-system auth follows the matrix. Effort multiplies by autonomy and regulatory burden. The OWASP LLM Top 10 is mapped onto the pipeline.
Deliver
Per-system server picks with auth schemes and scopes, an effort range in engineer-weeks, and the top five risk hotspots with concrete mitigations.
Patterns
Strictly local for single-user dev tools and high-sensitivity on-prem. No remote attack surface, but no shared use either.
The cloud-first default. Servers run as a service, transport is observable, and auth integrates with your IdP.
Mixed sensitivity tiers: high-sensitivity systems sit behind the gateway, low-risk ones connect directly. A pragmatic fit for an evolving rollout.
Centralized auth, rate limiting, prompt-injection filtering, and audit logging. The right fit when scale, writes, and regulation show up together.
Methodology
The risk pipeline maps to the OWASP LLM Top 10 (2026), with per-node severity driven by your inputs. The effort formula is published in pseudocode: autonomy multiplier × regulatory multiplier × (base scaffolding + per-server days + gateway days).
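The published effort formula can be sketched in a few lines of Python. The multiplier values, day counts, and category names below are illustrative placeholders, not the tool's calibrated numbers:

```python
# Hedged sketch of the effort formula:
#   autonomy multiplier × regulatory multiplier
#   × (base scaffolding + per-server days + gateway days)
# All constants here are assumptions for illustration only.

AUTONOMY_MULT = {"suggest-only": 1.0, "human-in-the-loop": 1.3, "autonomous": 1.8}
REGULATORY_MULT = {"none": 1.0, "single-regime": 1.2, "multi-regime": 1.5}

def effort_engineer_weeks(autonomy: str, regulation: str,
                          n_servers: int, gateway: bool) -> float:
    base_scaffolding_days = 5          # shared plumbing, CI, secrets handling
    per_server_days = 3                # integration work per MCP server
    gateway_days = 10 if gateway else 0
    days = (AUTONOMY_MULT[autonomy]
            * REGULATORY_MULT[regulation]
            * (base_scaffolding_days + n_servers * per_server_days + gateway_days))
    return round(days / 5, 1)          # 5 working days per engineer-week

print(effort_engineer_weeks("human-in-the-loop", "single-regime", 4, True))  # → 8.4
```

Because the multipliers compound, the same four-server integration drops to 1.6 engineer-weeks at suggest-only autonomy with no regulation and no gateway, which is why the tool reports a range rather than a point estimate.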
Read the full methodology · FAQ
MCP (Model Context Protocol) is an open standard introduced by Anthropic for connecting LLMs and agents to external tools, data, and systems through a uniform server interface. It is increasingly the default integration layer for production agents.
Because scoping and risk-assessing a real MCP rollout is the bottleneck. You decide which systems to connect, which servers per system, the auth scheme, the gateway question, and the OWASP-mapped risk hotspots. Twelve minutes versus a multi-week scoping engagement.
Local stdio for single-user dev tools and strict on-prem with high sensitivity. Remote SSE/HTTP for cloud deployments at any meaningful scale. Hybrid when you mix low- and high-sensitivity workloads. Gateway when you have writes + scale + regulatory.
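The hard rules above can be expressed as a short decision function. The parameter names and the 10K-MAU threshold follow the answer; the rule ordering is an assumption for illustration:

```python
# Sketch of the pattern-selection hard rules described above.
# Gateway wins when writes + scale + regulatory coincide; otherwise
# local stdio, hybrid, or the remote SSE/HTTP default.

def pick_pattern(single_user: bool, on_prem: bool, high_sensitivity: bool,
                 mau: int, writes: bool, regulated: bool,
                 mixed_sensitivity: bool) -> str:
    if writes and mau >= 10_000 and regulated:
        return "gateway"               # writes + scale + regulatory constraint
    if single_user or (on_prem and high_sensitivity):
        return "local-stdio"           # no remote attack surface
    if mixed_sensitivity:
        return "hybrid"                # high-sensitivity behind a gateway
    return "remote-sse-http"           # cloud-first default

print(pick_pattern(False, False, False, 50_000, True, True, False))  # → gateway
```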
When you have high autonomy + write-capable servers, or scale (10K+ MAU) + multi-tenant + a regulatory constraint. The gateway centralises OAuth, rate limiting, prompt-injection filtering, and audit logging.
See the matrix on the methodology page. B2C → OAuth 2.1 + short tokens. B2B with high sensitivity → OAuth 2.1 user delegated + per-session rotation. On-prem regulated → mTLS + short-lived JWT issued by internal CA.
Read-only by default, write only when explicitly required, admin scopes never. Per-user audit log. Allowlist tools per server. Rate limit at the gateway when one is recommended, otherwise at the server.
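Those defaults translate naturally into a per-server scope policy. The server name, tool names, and policy shape below are hypothetical, a sketch of the read-only-by-default posture rather than a real configuration format:

```python
# Illustrative per-server scope policy: read-only allowlist by default,
# writes only when explicitly enabled, admin tools never.

SCOPE_POLICY = {
    "github-mcp": {                                    # hypothetical server name
        "allowlist": ["list_issues", "get_pull_request"],  # read-only defaults
        "write_tools": [],             # populate only when explicitly required
        "admin_tools": "never",
        "audit": "per-user",
        "rate_limit": "gateway",       # or "server" when no gateway is in play
    },
}

def tool_allowed(server: str, tool: str) -> bool:
    policy = SCOPE_POLICY.get(server, {})
    return tool in policy.get("allowlist", []) or tool in policy.get("write_tools", [])

print(tool_allowed("github-mcp", "list_issues"))   # → True
print(tool_allowed("github-mcp", "delete_repo"))   # → False
```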
Yes when scoped correctly: short-lived tokens, mTLS for on-prem, full audit log, per-tool allowlist, and human-in-the-loop on writes. Each regulation adds specific requirements you can find on the methodology page.
User→LLM = LLM01 prompt injection. LLM→MCP = LLM06/LLM08 over-permissive scope and excessive agency. MCP→Downstream = LLM08/LLM09 exfiltration. Downstream→LLM = LLM01/LLM03 indirect injection. LLM→User = LLM02 sensitive disclosure.
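The edge-to-risk mapping above can be encoded as a lookup table. The edge names and risk IDs follow the answer verbatim; severity scoring is omitted:

```python
# The five pipeline edges mapped to the OWASP LLM Top 10 IDs quoted above.

RISK_PIPELINE = {
    "User->LLM":       ["LLM01"],           # prompt injection
    "LLM->MCP":        ["LLM06", "LLM08"],  # over-permissive scope, excessive agency
    "MCP->Downstream": ["LLM08", "LLM09"],  # exfiltration
    "Downstream->LLM": ["LLM01", "LLM03"],  # indirect injection
    "LLM->User":       ["LLM02"],           # sensitive disclosure
}

def risks_for(edge: str) -> list[str]:
    return RISK_PIPELINE.get(edge, [])

print(risks_for("LLM->MCP"))  # → ['LLM06', 'LLM08']
```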
Because multipliers compound. A four-server integration at the wrong autonomy + regulatory level easily doubles. We always show a range with the breakdown so you can see what is driving the spread.
Use the official one when it exists. High-quality community servers when not. Build custom only when you have a domain-specific protocol or compliance constraint that no public server meets.
Yes. Almost every server in the registry can run on your own infrastructure. The tool flags which ones, what dependencies they need, and the typical sizing.
For strict residency, host the MCP server in-region or on-prem. The tool surfaces residency implications in the per-system table when your regulatory constraints are set.
modelcontextprotocol.io weekly scan, MCP.so quarterly, PulseMCP quarterly, spec repo RSS in real-time. Editorial review quarterly.
Tool allowlist per server, per-session scope narrowing, output scanning for known injection patterns, and per-tool confirmation on writes when autonomy is below post-review.
Gateway-first, then a phased server rollout (3 servers per quarter is realistic at first). Centralise audit + observability before scaling beyond five servers.
Step 1 of 9 · Systems
Next: Configure
Select every system you plan to integrate. Access is configured in the next step.
Communications
CRM & Sales
Productivity & Docs
Engineering & DevOps
Data
Infra
Ops & Commerce