Shadow AI · Tool risk profile
Claude
by Anthropic · Generative AI · Verified April 19, 2026
Vendor site
Base risk: 3.2 / 5
Anthropic does not train on customer inputs by default across consumer or enterprise tiers, which gives Claude a stronger default privacy posture than most general-purpose chatbots. Risk concentrates in two areas: the newer Computer Use and agent features, which can take real actions in browsers and applications, and the long context window, which makes large-scale document leakage possible in a single chat. The Enterprise tier adds SSO, audit logs, and a no-retention option suitable for regulated workflows.
Tier comparison
Free: medium
Paid · consumer: medium
Enterprise · team: low

Safer alternatives
FAQ
Does Anthropic train on my prompts?
No. Anthropic does not use prompts or completions to train its base models by default on any tier. Training data uses are documented in their Usage Policy.

Can Claude be used with PHI under HIPAA?
Anthropic offers BAAs for Claude Enterprise and the Claude API; consumer tiers are not appropriate for PHI without one.

Where is my data processed?
Anthropic processes data primarily in US regions; EU data residency is available on the Claude Enterprise plan via Amazon Bedrock or Google Vertex AI.
Audit your shadow AI
Run a free 12-minute audit to surface every shadow AI tool on your network, score the risk, and walk away with a block-list your IT team can import.
Buzzi.ai publishes tool risk profiles for informational purposes only. Always validate terms with the vendor before operational decisions.