We’ve been working on adding “authorization” to an AI agent system.
At first, it felt solved:
- every action gets evaluated
- we get a signed ALLOW / DENY
- we verify the signature before execution
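For context, the naive version is basically this (Python sketch; HMAC with a shared key stands in for whatever real signing scheme you'd use, and all names are illustrative):

```python
import hmac
import hashlib

SECRET = b"shared-authorizer-key"  # assumption: symmetric key for illustration

def sign_decision(decision: str) -> bytes:
    # Authorizer side: sign the bare decision string.
    return hmac.new(SECRET, decision.encode(), hashlib.sha256).digest()

def verify(decision: str, sig: bytes) -> bool:
    # Executor side: check the signature before running the action.
    return hmac.compare_digest(sign_decision(decision), sig)

sig = sign_decision("ALLOW")
verify("ALLOW", sig)  # True -- but nothing ties this to *what* executes, *when*, or *where*
```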
Looks solid, right?
It wasn’t.
We hit a few problems almost immediately:
- The approval wasn’t bound to the actual execution. The same “ALLOW” could be reused for a slightly different action.
- No state binding. The approval was issued when state = X, execution happened when state = Y, and verification still passed.
- No audience binding. An approval for service A could be replayed against service B.
- Replay wasn’t actually enforced at the boundary. Even with nonces, enforcement wasn’t happening where execution happens.
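Roughly, closing all four gaps means the approval has to carry the intent, the state, and the audience, and the executor has to check all of them plus the nonce at the moment of execution. A minimal sketch (again HMAC plus an in-memory nonce set as stand-ins; field names and hashing choices are illustrative, not a spec):

```python
import hmac
import hashlib
import json

SECRET = b"shared-authorizer-key"  # assumption: symmetric key for illustration
used_nonces = set()                # replay store, held at the execution boundary

def canonical(obj: dict) -> bytes:
    # Deterministic serialization so both sides hash the same bytes.
    return json.dumps(obj, sort_keys=True).encode()

def issue_approval(action: dict, state_hash: str, audience: str, nonce: str):
    # Authorizer side: bind the decision to intent, state, audience, and nonce.
    payload = canonical({
        "intent": hashlib.sha256(canonical(action)).hexdigest(),
        "state": state_hash,
        "aud": audience,
        "nonce": nonce,
    })
    return payload, hmac.new(SECRET, payload, hashlib.sha256).digest()

def enforce(action: dict, current_state_hash: str, my_audience: str,
            payload: bytes, sig: bytes) -> bool:
    # Executor side: every check happens here, at the boundary.
    if not hmac.compare_digest(
            hmac.new(SECRET, payload, hashlib.sha256).digest(), sig):
        return False  # signature
    claims = json.loads(payload)
    if claims["intent"] != hashlib.sha256(canonical(action)).hexdigest():
        return False  # intent binding: approval is for *this exact* action
    if claims["state"] != current_state_hash:
        return False  # state binding: world hasn't moved since approval
    if claims["aud"] != my_audience:
        return False  # audience binding: approval was meant for *this* service
    if claims["nonce"] in used_nonces:
        return False  # replay: enforced where execution happens
    used_nonces.add(claims["nonce"])
    return True
```

The point isn’t the crypto, it’s where the checks live: a valid signature alone proves “this was approved,” while `enforce` answers “can this execute, here, now, against this state.”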
So what we had was:
a signed decision
What we needed was:
a verifiable execution contract
The difference is subtle but critical:
- “Was this approved?” -> audit question
- “Can this execute?” -> enforcement question
Most systems answer the first one.
Very few actually enforce the second one.
Curious how others are thinking about this.
Are you binding approvals to:
- exact intent?
- execution state?
- execution target?
Or are you just verifying signatures and hoping it lines up?