Semantic Intent Fragmentation: A Single-Shot Compositional Attack on Multi-Agent AI Pipelines
arXiv cs.AI · April 13, 2026
Key Points
- The paper introduces “Semantic Intent Fragmentation (SIF),” an attack against LLM orchestration systems where a single benign request yields subtasks that individually pass safety checks but collectively violate policy.
- SIF is shown to exploit OWASP LLM06:2025 through mechanisms including bulk scope escalation, silent data exfiltration, embedded trigger deployment, and quasi-identifier aggregation, without requiring prompt injection, system modification, or any attacker interaction after the initial request.
- In 14 enterprise-style red-teaming scenarios (financial reporting, information security, HR analytics), a GPT-20B orchestrator generated policy-violating plans in 71% of cases (10/14) while each subtask appeared benign to subtask-level classifiers.
- The authors validate the attack with deterministic taint analysis, chain-of-thought evaluation, and a cross-model compliance judge with 0% false positives, and find that stronger orchestrators can increase SIF success rates.
- They argue the compositional safety gap can be addressed by adding plan-level information-flow tracking and compliance evaluation, detecting all attacks before execution in their tests.
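The plan-level information-flow tracking the authors propose can be illustrated with a minimal taint-propagation sketch. All names, the plan schema, and the policy rule below are illustrative assumptions for exposition, not the paper's actual implementation:

```python
# Sketch of plan-level taint tracking: flag a plan whose *combined*
# data flow moves sensitive data to an external sink, even when each
# subtask would pass a per-step safety check in isolation.
# (Source/sink labels and the step format are hypothetical.)

SENSITIVE_SOURCES = {"hr_database", "payroll_records"}
EXTERNAL_SINKS = {"email_external", "public_share"}

def plan_violates_policy(plan):
    """Each step is {"reads": [...], "writes": [...], "sink": str | None}."""
    tainted = set(SENSITIVE_SOURCES)
    for step in plan:
        if tainted & set(step["reads"]):
            tainted.update(step["writes"])          # propagate taint forward
            if step.get("sink") in EXTERNAL_SINKS:  # tainted data crosses the boundary
                return True
    return False

# Two subtasks that each look benign alone but compose into exfiltration:
plan = [
    {"reads": ["hr_database"], "writes": ["report.csv"], "sink": None},
    {"reads": ["report.csv"], "writes": [], "sink": "email_external"},
]
```

Here neither step alone names both a sensitive source and an external sink, which is exactly the compositional gap a subtask-level classifier misses; the whole-plan taint walk catches it before execution.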