Runtime Governance for AI Agents: Policies on Paths
arXiv cs.AI · March 18, 2026
Key Points
- The paper argues that AI agents' non-deterministic, path-dependent behavior cannot be fully governed at design time and that runtime governance must balance task completion with legal, data-breach, reputational, and other costs.
- It formalizes compliance policies as deterministic functions that map agent identity, partial path, proposed next action, and organizational state to a policy violation probability.
- It shows that prompt-level instructions and static access control are special cases of this framework: design-time controls that influence or constrain the paths an agent can take.
- It argues that for genuinely path-dependent policies, runtime evaluation of the path taken so far is the general enforcement mechanism, beyond what static, design-time controls can express.
- It presents a formal governance framework, concrete policy examples inspired by the AI Act, discusses a reference implementation, and identifies open problems including risk calibration and the limits of enforced compliance.
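The formalization in the second key point can be sketched in code. The sketch below is illustrative, not the paper's reference implementation: the policy, type names, and the threshold-gating rule are assumptions. It models a compliance policy as a deterministic function from (agent identity, partial path, proposed action, organizational state) to a violation probability, and shows a path-dependent example that a static access-control list could not express.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    name: str       # e.g. "read", "export"
    resource: str   # e.g. "pii_db", "s3_bucket"

@dataclass
class AgentContext:
    agent_id: str
    path: List[Action] = field(default_factory=list)   # actions taken so far
    org_state: dict = field(default_factory=dict)      # organizational state

# A compliance policy: (identity + partial path + org state, proposed action)
# -> estimated probability that taking the action violates the policy.
Policy = Callable[[AgentContext, Action], float]

def pii_after_read_policy(ctx: AgentContext, proposed: Action) -> float:
    """Hypothetical path-dependent policy: exporting data is only risky
    if the agent previously read a PII resource on this path."""
    read_pii = any(a.name == "read" and a.resource == "pii_db" for a in ctx.path)
    if proposed.name == "export" and read_pii:
        return 0.9
    return 0.0

def runtime_gate(ctx: AgentContext, proposed: Action,
                 policies: List[Policy], threshold: float = 0.5) -> bool:
    """Runtime governance check: allow the proposed action only if every
    policy's estimated violation probability stays below the threshold."""
    return all(p(ctx, proposed) < threshold for p in policies)

ctx = AgentContext(agent_id="agent-7")
ctx.path.append(Action("read", "pii_db"))
allowed = runtime_gate(ctx, Action("export", "s3_bucket"), [pii_after_read_policy])
# allowed is False: the partial path makes this export a likely violation
```

Note how static access control falls out as a special case: a policy that ignores `ctx.path` and `ctx.org_state` and returns 0 or 1 based only on identity and action reduces to an access-control list evaluated at runtime.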