From Governance Norms to Enforceable Controls: A Layered Translation Method for Runtime Guardrails in Agentic AI
arXiv cs.AI / 4/8/2026
Key Points
- The paper argues that agentic AI creates distinct governance risks that arise during multi-step execution, requiring runtime guardrails rather than relying only on development-time or deployment-time safeguards.
- It proposes a “layered translation method” that maps governance standards (e.g., ISO/IEC 42001 and NIST AI RMF) into four control layers: governance objectives, design-time constraints, runtime mediation, and assurance feedback.
- The method clarifies the relationships between governance objectives, technical controls, runtime guardrails, and the assurance evidence needed for audits and accountability.
- It introduces a control tuple and a runtime-enforceability rubric to decide which controls are suitable for enforcement during execution (i.e., when they are observable, determinate, and sufficiently time-sensitive).
- The approach is demonstrated via a procurement-agent case study, showing how standards can inform where controls should live across architecture, runtime policy, escalation, and audit.
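The control tuple and enforceability rubric described above can be sketched as a small data model. This is an illustrative encoding, not the paper's exact schema: the field names (`objective`, `layer`, `observable`, `determinate`, `time_sensitive`) and the example spend-cap control are assumptions chosen to match the rubric's three criteria.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    """Hypothetical control tuple: links a governance objective to the
    layer where the control lives, plus the rubric's three properties."""
    objective: str        # governance objective the control traces to
    layer: str            # "design-time", "runtime", or "assurance"
    observable: bool      # can the relevant agent behavior be seen at runtime?
    determinate: bool     # can a mediator decide pass/fail unambiguously?
    time_sensitive: bool  # must the decision happen during execution?

def runtime_enforceable(c: Control) -> bool:
    """Per the rubric, a control is a candidate for runtime mediation only
    if it is observable, determinate, and time-sensitive; otherwise it
    belongs at design time or in the assurance-feedback layer."""
    return c.observable and c.determinate and c.time_sensitive

# Illustrative control for the procurement-agent case study.
spend_cap = Control(
    objective="limit procurement-agent spend per order",
    layer="runtime",
    observable=True,
    determinate=True,
    time_sensitive=True,
)
print(runtime_enforceable(spend_cap))  # → True
```

A control failing any one criterion (say, a fairness objective that is not determinate at the level of a single tool call) would instead be handled through design-time constraints or post-hoc assurance evidence.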