How we score shadow AI risk
Per-tool risk model
Every tool starts with a base risk score (1.0 to 5.0) reflecting the intrinsic data-handling profile of the product: the default training-data stance, the breadth of data access, and the level of admin control available.
Three multipliers apply in sequence, along with a set of tier-level flags:
- Tier multiplier (free / paid-consumer / enterprise): captures whether the tier contractually excludes training-on-inputs and enables SSO.
- Regulated-data multiplier: rises with PHI / MNPI / CJIS / PCI / GDPR-EU / generic PII.
- Governance gap multiplier: a multiplicative adjustment based on the organisation's overall governance score, amplifying realised risk when controls are weak.
- Tier-level flags: recorded in rationale strings so users can see exactly which tier-specific attributes pushed the score.
The final score is clamped to 1-10 and mapped to a band: LOW (< 4), MEDIUM (4-5.99), HIGH (6-7.99), CRITICAL (8+).
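The scoring pipeline above can be sketched as follows. This is a minimal illustration, not the production implementation: the function names and the specific multiplier values in the usage example are assumptions; only the clamp range and band thresholds come from the text.

```python
def band(score: float) -> str:
    # Map a clamped 1-10 score to a band using the documented thresholds:
    # LOW (< 4), MEDIUM (4-5.99), HIGH (6-7.99), CRITICAL (8+).
    if score >= 8:
        return "CRITICAL"
    if score >= 6:
        return "HIGH"
    if score >= 4:
        return "MEDIUM"
    return "LOW"

def tool_risk(base: float, tier_mult: float, regulated_mult: float,
              governance_gap_mult: float) -> tuple[float, str]:
    # Multipliers apply in sequence; the result is clamped to 1-10.
    raw = base * tier_mult * regulated_mult * governance_gap_mult
    score = max(1.0, min(10.0, raw))
    return score, band(score)

# Illustrative values: a base-3.0 tool on a consumer tier handling PHI
# in a weakly governed org lands in the HIGH band.
score, level = tool_risk(3.0, tier_mult=1.4, regulated_mult=1.5,
                         governance_gap_mult=1.2)
```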
Governance score
Computed from 12 weighted governance questions covering policy, procurement, visibility, access, privacy, training, incident response, and monitoring. Each answer converts to a 0-4 magnitude, is multiplied by its question weight, and the weighted total is expressed on a 0-100 scale.
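A sketch of that computation, assuming the 0-100 scale is the weighted mean rescaled from the 0-4 magnitude range (the exact aggregation form is not stated in the text):

```python
def governance_score(answers: list[int], weights: list[float]) -> float:
    # answers: one 0-4 magnitude per governance question (12 in practice).
    # weights: the per-question weights.
    assert len(answers) == len(weights)
    total = sum(a * w for a, w in zip(answers, weights))
    max_total = sum(4 * w for w in weights)  # all answers at magnitude 4
    return round(100 * total / max_total, 1)
```

With equal weights, twelve mid-range answers (magnitude 2) score 50.0, and twelve perfect answers score 100.0.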
Overall risk level
A distribution heuristic:
- Any CRITICAL tool → overall CRITICAL.
- Three or more HIGH tools → overall HIGH.
- Otherwise a weighted average of the distribution, adjusted by governance score.
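The heuristic can be sketched like this. The first two rules come from the text; the band weights and the exact form of the governance adjustment are assumptions chosen so that a weak governance score (below 50) pushes the average up:

```python
from collections import Counter

BAND_WEIGHT = {"LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}  # assumed weights

def overall_risk(bands: list[str], governance: float) -> str:
    counts = Counter(bands)
    if counts["CRITICAL"] >= 1:          # any CRITICAL tool -> overall CRITICAL
        return "CRITICAL"
    if counts["HIGH"] >= 3:              # three or more HIGH tools -> overall HIGH
        return "HIGH"
    # Otherwise: weighted average of the distribution, adjusted by the
    # 0-100 governance score (weak governance amplifies the average).
    avg = sum(BAND_WEIGHT[b] for b in bands) / max(len(bands), 1)
    adjusted = avg * (1 + (50 - governance) / 100)
    if adjusted >= 3:
        return "HIGH"
    if adjusted >= 2:
        return "MEDIUM"
    return "LOW"
```

For example, a MEDIUM and a HIGH tool in an org with a governance score of 20 average to 2.5, get amplified to 3.25, and land at overall HIGH.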
Registry verification policy
Each tool in the registry is verified at least every 90 days against vendor-published documentation, with the timestamp stored as last_verified_at. A weekly freshness cron flags tools past the 90-day window for editorial re-review. Changes (new incidents, policy updates, risk changes) are recorded in ai_tools_changelog and surface on tool profile pages.
Benchmark methodology
Peer benchmarks are anonymised aggregates across audits completed in the last 12 months, bucketed by industry + company size. Minimum sample size: 15 audits per bucket; we suppress the benchmark when data is thinner than that.
Framework alignment
Recommendations carry framework_tags referencing NIST AI RMF (Govern / Map / Measure / Manage), EU AI Act (article references), ISO/IEC 42001, and where relevant sector rules (HIPAA, SR 11-7, CJIS, PCI-DSS). This makes it easy to route actions to the owners who already track those frameworks.