OpenKedge: Governing Agentic Mutation with Execution-Bound Safety and Evidence Chains
arXiv cs.AI / 4/13/2026
Key Points
- The paper argues that autonomous agents become unsafe when API-triggered mutations are executed directly from probabilistic model decisions, without adequate context, coordination, or safety guarantees.
- It proposes OpenKedge, a protocol that turns mutations into a governed workflow by requiring declarative intent proposals that are checked against deterministically derived state, temporal signals, and policy constraints before any execution.
- OpenKedge compiles approved intents into execution contracts that strictly limit actions, resource scope, and time, enforced through ephemeral task-oriented identities for stronger execution-bound safety.
- A key contribution is an Intent-to-Execution Evidence Chain (IEEC) that cryptographically links intent, context, policy decisions, execution bounds, and outcomes to enable deterministic auditability and reasoning.
- Evaluations in multi-agent conflict and cloud infrastructure mutation scenarios suggest the protocol can deterministically arbitrate competing intents while containing unsafe executions without sacrificing throughput.
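To make the execution-contract idea concrete, here is a minimal sketch of what compiling an approved intent into a bounded contract might look like. All names (`ExecutionContract`, `permits`, the field names) are illustrative assumptions, not the paper's API; the point is only that actions, resource scope, and time are checked before any mutation runs.

```python
import time
from dataclasses import dataclass


@dataclass(frozen=True)
class ExecutionContract:
    """Bounds compiled from an approved intent (hypothetical shape)."""
    allowed_actions: frozenset   # e.g. {"scale"} -- the only verbs permitted
    resource_scope: str          # prefix of resources the ephemeral identity may touch
    expires_at: float            # contract expiry (epoch seconds)

    def permits(self, action: str, resource: str, now: float | None = None) -> bool:
        """An execution is allowed only if action, scope, and time all check out."""
        now = time.time() if now is None else now
        return (
            action in self.allowed_actions
            and resource.startswith(self.resource_scope)
            and now < self.expires_at
        )


contract = ExecutionContract(
    allowed_actions=frozenset({"scale"}),
    resource_scope="cluster-a/",
    expires_at=time.time() + 60.0,
)
print(contract.permits("scale", "cluster-a/web"))    # → True (in scope, in time)
print(contract.permits("delete", "cluster-a/web"))   # → False (action not granted)
print(contract.permits("scale", "cluster-b/web"))    # → False (outside scope)
```

In this sketch the contract is immutable (`frozen=True`) and every check must pass, mirroring the paper's claim that contracts "strictly limit actions, resource scope, and time" rather than trusting the agent's own judgment at execution time.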
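The Intent-to-Execution Evidence Chain can be pictured as a hash-linked log: each stage record commits to the previous record's hash, so tampering with any earlier entry invalidates everything after it. This is a generic hash-chain sketch under assumed record shapes, not the paper's IEEC implementation.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record


def append_record(chain: list, stage: str, payload: dict) -> dict:
    """Append a record whose hash covers its payload plus the previous hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"stage": stage, "payload": payload, "prev": prev},
                      sort_keys=True)
    rec = {"stage": stage, "payload": payload, "prev": prev,
           "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(rec)
    return rec


def verify(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier record breaks the links."""
    prev = GENESIS
    for rec in chain:
        body = json.dumps({"stage": rec["stage"], "payload": rec["payload"],
                           "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True


# One chain per mutation: intent -> policy decision -> execution bounds -> outcome.
chain: list = []
for stage, payload in [
    ("intent",  {"action": "scale", "target": "cluster-a/web"}),
    ("policy",  {"decision": "approve"}),
    ("bounds",  {"ttl_s": 60}),
    ("outcome", {"status": "applied"}),
]:
    append_record(chain, stage, payload)

print(verify(chain))                        # → True
chain[1]["payload"]["decision"] = "deny"    # tamper with the policy record
print(verify(chain))                        # → False
```

Because each record's hash covers the previous hash, an auditor can deterministically replay the chain and confirm that the recorded outcome really followed from the recorded intent, context, and policy decision.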