Runtime Governance for AI Agents: Policies on Paths
arXiv cs.AI / March 18, 2026
Key Points
- The paper argues that AI agents' non-deterministic, path-dependent behavior cannot be fully governed at design time and that runtime governance must balance task completion with legal, data-breach, reputational, and other costs.
- It formalizes compliance policies as deterministic functions that map agent identity, the partial path taken so far, a proposed next action, and organizational state to a policy-violation probability (sketched in code after these key points).
- It shows that prompt-level instructions and static access control are special cases of this framework, illustrating how these controls influence or constrain agent paths.
- It argues that path-dependent policies cannot be enforced by static, design-time controls alone; the general approach is runtime evaluation of the path as it unfolds.
- It presents a formal governance framework, concrete policy examples inspired by the AI Act, discusses a reference implementation, and identifies open problems including risk calibration and the limits of enforced compliance.
Related Articles
The massive shift toward edge computing and local processing
Dev.to
Self-Refining Agents in Spec-Driven Development
Dev.to
Week 3: Why I'm Learning 'Boring' ML Before Building with LLMs
Dev.to
The Three-Agent Protocol Is Transferable. The Discipline Isn't.
Dev.to

Has anyone tried this? Flash-MoE: Running a 397B Parameter Model on a Laptop
Reddit r/LocalLLaMA