AI Navigate

We Scanned 11,529 MCP Servers for EU AI Act Compliance

Dev.to / 3/22/2026

📰 News · Developer Stack & Infrastructure · Signals & Early Trends · Industry & Market Moves

Key Points

  • A scan of 11,529 MCP servers found 850 servers (7.4%) with EU AI Act compliance issues, and none had any EU AI Act documentation.
  • The biggest issue category is Missing Risk Documentation (Art. 9) affecting 438 servers (51.5%), including 187 with prompt injection vulnerabilities in tool descriptions, 156 with unvalidated external data flows, and 127 with no error handling documentation.
  • Insufficient Transparency (Art. 13) affected 312 servers (36.7%), with 134 missing capability boundaries, 107 lacking disclosure of cross-origin data access, and 96 with undisclosed capabilities beyond their stated purpose.
  • Robustness Gaps (Art. 15) affected 186 servers (21.9%), including 83 with excessive permission requests, 67 with command injection vulnerabilities, and 58 with exposed credentials in configurations.
  • Enforcement of the EU AI Act begins on August 2, 2026. Industry guidance places full compliance at 32-56 weeks across a phased remediation timeline, which makes MCP servers (the interface layer between AI models and external tools) a critical compliance focus whenever they handle personal data or operate in regulated domains.

We scanned every MCP server in the public registry — 11,529 of them — using 200 regex-based detection patterns across 15 languages. No LLM in the loop, no cloud dependency, pure deterministic analysis.

The headline number: 850 servers (7.4%) have compliance issues. Zero of them have any EU AI Act documentation.

The EU AI Act enters enforcement on August 2, 2026 — 134 days from now.

Why MCP Servers Matter for EU AI Act

MCP (Model Context Protocol) servers are the interface layer between AI models and external tools. When an AI agent reads your email, queries a database, or executes code — it goes through MCP.

Under Article 6/Annex III, these become compliance-relevant when they handle personal data or operate in regulated domains. And most of them do.

What We Found

1. Missing Risk Documentation (Art. 9) — 438 servers (51.5%)

The biggest category. Article 9 requires documented risk management for high-risk AI systems.

  • 187 servers: Prompt injection vulnerabilities in tool descriptions
  • 156 servers: Unvalidated external data flows
  • 127 servers: No error handling documentation

Real example: A file-system MCP server that accepts arbitrary paths without validation. An attacker-controlled prompt could read /etc/passwd through the AI agent. No risk documentation exists.
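The fix for this class of bug is small. A minimal Python sketch, assuming a hypothetical sandbox root (`/srv/mcp-data` is invented for illustration), that rejects any resolved path escaping that root:

```python
from pathlib import Path

ALLOWED_ROOT = Path("/srv/mcp-data").resolve()  # hypothetical sandbox root

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied path and reject anything outside the sandbox."""
    candidate = (ALLOWED_ROOT / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) catches both ../ traversal and
    # absolute paths like /etc/passwd, which pathlib joins as a new root.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes sandbox: {user_path}")
    return candidate
```

With this guard in place, the `/etc/passwd` attack above fails before any file is opened, whether it arrives as an absolute path or as a `../` chain.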

2. Insufficient Transparency (Art. 13) — 312 servers (36.7%)

Article 13 requires AI systems to be sufficiently transparent to enable deployers to interpret the system's output.

  • 134 servers: Missing capability boundaries — tools don't document what they can't do
  • 107 servers: Cross-origin data access without disclosure
  • 96 servers: Undisclosed capabilities beyond stated purpose

3. Robustness Gaps (Art. 15) — 186 servers (21.9%)

Article 15 requires AI systems to achieve an appropriate level of accuracy, robustness and cybersecurity.

  • 83 servers: Excessive permission requests
  • 67 servers: Command injection vulnerabilities
  • 58 servers: Exposed credentials in configurations
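Of those categories, command injection is the most mechanical to both introduce and avoid. A hedged Python sketch of the unsafe and safe patterns (function names are illustrative, not from any scanned server):

```python
import subprocess

def count_lines_unsafe(user_arg: str) -> str:
    # Vulnerable: user_arg is spliced into a shell string, so an input like
    # "notes.txt; rm -rf ~" executes a second command.
    return subprocess.run(
        f"wc -l {user_arg}", shell=True, capture_output=True, text=True
    ).stdout

def count_lines_safe(user_arg: str) -> str:
    # Safe: argument vector with no shell, so metacharacters like ";" and
    # "|" reach the program as literal text, not shell syntax.
    return subprocess.run(
        ["wc", "-l", user_arg], capture_output=True, text=True
    ).stdout
```

The difference is one keyword argument; an automated pattern scan can flag `shell=True` with interpolated input reliably, which is exactly the kind of check that needs no LLM.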

The Timeline Problem

Industry guidance says full EU AI Act compliance takes 32-56 weeks:

| Phase | Duration |
| --- | --- |
| Risk classification | 2-4 weeks |
| Gap analysis | 4-8 weeks |
| Remediation | 12-24 weeks |
| Conformity assessment | 8-16 weeks |
| Monitoring setup | 4-8 weeks |
| **Minimum total** | **224 days** |

134 days remain. The math doesn't work for anyone starting now.

How We Built the Scanner

No LLM-in-the-loop. Here's why:

The obvious approach is using another LLM to detect prompt injection. But that creates a circular dependency — the attacker controls what the LLM sees. Queen's University tested this on 1,899 MCP servers: system prompt restrictions reduced attack success by only 0.65%.

Instead, we use a 10-stage preprocessing pipeline:

  1. Leetspeak normalization (1gn0r3 → ignore)
  2. Zero-width character stripping (U+200B, U+FEFF)
  3. Homoglyph detection (Cyrillic а vs Latin a)
  4. Unicode fullwidth normalization
  5. Base64 decoding of embedded payloads
  6. HTML entity unescaping
  7. ROT13/Caesar detection
  8. Whitespace normalization
  9. Cross-line joining
  10. Case normalization with context preservation
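A few of those stages can be sketched in Python. This is an illustrative reduction, not the scanner's actual code; the leetspeak map and homoglyph table below are tiny assumed subsets of what a real pipeline would carry:

```python
import re
import unicodedata

# Assumed sample mappings, far smaller than a production table.
LEET = str.maketrans("013457", "oieast")                  # stage 1: 1gn0r3 -> ignore
ZERO_WIDTH = re.compile("[\u200b\u200c\u200d\ufeff]")     # stage 2
HOMOGLYPHS = {"\u0430": "a", "\u0435": "e", "\u043e": "o", "\u0441": "c"}  # stage 3: Cyrillic -> Latin

def normalize(text: str) -> str:
    text = ZERO_WIDTH.sub("", text)                        # strip zero-width chars
    text = unicodedata.normalize("NFKC", text)             # stage 4: fold fullwidth forms
    text = "".join(HOMOGLYPHS.get(ch, ch) for ch in text)  # map common homoglyphs
    text = text.translate(LEET)                            # naive leetspeak (note: mangles real digits)
    return text.lower()                                    # stage 10: case normalization
```

After normalization, `1gn0r3​ pr3v10us` and `ＩＧＮＯＲＥ` both collapse to plain lowercase ASCII, so a single regex set can cover every obfuscated variant.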

Then 200 regex patterns across 9 categories and 15 languages. Sub-10ms response time. F1 = 98.0% on 262 test cases.

Deterministic. Auditable. No hallucinated false negatives.
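The matching stage itself is straightforward to picture. A hedged sketch with two invented pattern categories; the scanner's real 200-pattern set is not reproduced here:

```python
import re

# Hypothetical excerpt: a handful of patterns, grouped by category.
PATTERNS = {
    "prompt_injection": [
        re.compile(r"\bignore (all |any )?(previous|prior|above) (instructions|messages)\b", re.I),
        re.compile(r"\byou are now\b.{0,40}\b(unrestricted|jailbroken)\b", re.I),
    ],
    "credential_exposure": [
        re.compile(r"(?i)\b(api[_-]?key|secret|password)\b\s*[:=]\s*['\"][^'\"]{8,}"),
    ],
}

def scan(text: str) -> list[str]:
    """Return the categories whose patterns match a tool description."""
    return [cat for cat, pats in PATTERNS.items() if any(p.search(text) for p in pats)]
```

Because every pattern is a plain regex, each finding traces back to an exact byte range in the scanned description, which is what makes the results auditable.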

What You Should Do

If you maintain an MCP server:

  • Run an automated scan against your tool descriptions
  • Document capabilities explicitly (what your tool does AND what it doesn't)
  • Validate all inputs — especially file paths, URLs, and SQL
  • Add risk metadata to your server manifest
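On the last point: the MCP manifest has no official EU AI Act schema today, so any risk metadata is a documentation convention you define yourself. The field names below are purely hypothetical, sketched as a Python dict:

```python
# Hypothetical risk-metadata block; no official MCP or EU AI Act schema
# defines these field names, so treat this as a documentation convention.
RISK_METADATA = {
    "eu_ai_act": {
        "risk_class": "high",             # Annex III classification, if any
        "personal_data": True,            # do any tools touch personal data?
        "capability_boundaries": [
            "read-only access to the project workspace",
            "no network egress",
        ],
        "last_assessment": "2026-03-01",  # date of the latest gap analysis
    },
}
```

Even a self-invented block like this gives deployers something to inventory against, which is more than the zero servers with any documentation found in the scan.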

If you deploy MCP servers in production:

  • Inventory every MCP server your AI agents connect to
  • Classify by risk level under Annex III
  • Start compliance assessment now — not next quarter

If you're a security team:

  • MCP is your next attack surface. Treat it like APIs in 2015.

Try It

The scanner is open source (MIT):

Questions about the methodology, detection patterns, or how to scan your own MCP servers? Drop a comment — happy to go deep on the technical details.