Policy-Invisible Violations in LLM-Based Agents

arXiv cs.AI / April 15, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper identifies a new failure mode for LLM-based agents—“policy-invisible violations,” where actions are syntactically valid, user-approved, and semantically appropriate but still breach organizational policy due to missing policy-relevant facts at decision time.
  • It introduces PhantomPolicy, a benchmark covering eight categories of violations in which tool responses intentionally omit policy metadata, and it reports that trace-level human review changed 32 labels (5.3%) relative to the original case-level annotations.
  • The study proposes Sentinel, an enforcement framework that grounds policy decisions in a simulated organizational knowledge-graph “post-action” world state using counterfactual graph simulation and invariant checks (Allow/Block/Clarify).
  • In evaluations against human-reviewed trace labels, Sentinel significantly improves accuracy over a content-only DLP baseline (reported as 93.0% vs. 68.8%) while keeping high precision, though some categories remain challenging.

Abstract

LLM-based agents can execute actions that are syntactically valid, user-sanctioned, and semantically appropriate, yet still violate organizational policy because the facts needed for correct policy judgment are hidden at decision time. We call this failure mode policy-invisible violations: cases in which compliance depends on entity attributes, contextual state, or session history absent from the agent's visible context. We present PhantomPolicy, a benchmark spanning eight violation categories with balanced violation and safe-control cases, in which all tool responses contain clean business data without policy metadata. We manually review all 600 model traces produced by five frontier models and evaluate them using human-reviewed trace labels. Manual review changes 32 labels (5.3%) relative to the original case-level annotations, confirming the need for trace-level human review. To demonstrate what world-state-grounded enforcement can achieve under favorable conditions, we introduce Sentinel, an enforcement framework based on counterfactual graph simulation. Sentinel treats every agent action as a proposed mutation to an organizational knowledge graph, performs speculative execution to materialize the post-action world state, and verifies graph-structural invariants to decide Allow/Block/Clarify. Against human-reviewed trace labels, Sentinel substantially outperforms a content-only DLP baseline (93.0% vs. 68.8% accuracy) while maintaining high precision, though it still leaves room for improvement on certain violation categories. These results demonstrate what becomes achievable once policy-relevant world state is made available to the enforcement layer.
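To make the enforcement loop concrete, here is a minimal Python sketch of the counterfactual-simulation idea described above: an agent action is treated as a proposed mutation to an organizational knowledge graph, speculatively applied to produce a post-action state, and checked against a structural invariant before returning Allow/Block/Clarify. All names here (`OrgGraph`, the single `shared_with` invariant, the attribute keys) are illustrative assumptions, not the paper's actual API or policy set.

```python
# Illustrative sketch only -- the graph schema, invariant, and decision
# logic are hypothetical stand-ins for Sentinel's actual design.
import copy
from dataclasses import dataclass, field


@dataclass
class OrgGraph:
    # node_id -> attribute dict, e.g. {"type": "document", "classification": "confidential"}
    nodes: dict = field(default_factory=dict)
    # (src, relation, dst) triples, e.g. ("doc1", "shared_with", "alice@partner.com")
    edges: set = field(default_factory=set)


def apply_action(graph: OrgGraph, action: dict) -> OrgGraph:
    """Speculative execution: materialize the post-action world state
    on a copy, leaving the real graph untouched."""
    post = copy.deepcopy(graph)
    post.edges |= set(action.get("add_edges", []))
    return post


def no_external_confidential_share(g: OrgGraph) -> bool:
    """Example graph-structural invariant: confidential documents are
    never shared with external principals."""
    for src, rel, dst in g.edges:
        if (rel == "shared_with"
                and g.nodes.get(src, {}).get("classification") == "confidential"
                and g.nodes.get(dst, {}).get("external", False)):
            return False
    return True


INVARIANTS = [no_external_confidential_share]


def decide(graph: OrgGraph, action: dict) -> str:
    """Return Allow/Block/Clarify based on the simulated post-action state."""
    post = apply_action(graph, action)
    # Policy-relevant attribute missing at decision time: ask, don't guess.
    for src, rel, _dst in post.edges:
        if rel == "shared_with" and "classification" not in post.nodes.get(src, {}):
            return "Clarify"
    if not all(inv(post) for inv in INVARIANTS):
        return "Block"
    return "Allow"


g = OrgGraph(nodes={
    "doc1": {"type": "document", "classification": "confidential"},
    "alice@partner.com": {"type": "user", "external": True},
})
action = {"add_edges": [("doc1", "shared_with", "alice@partner.com")]}
print(decide(g, action))  # "Block": the post-action state violates the invariant
```

Note that the pre-action graph is never mutated; only the speculative copy is, which is what lets the check run before the tool call actually executes.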