Built a demo where an agent can provision 2 GPUs, then gets hard-blocked on the 3rd call

Reddit r/artificial / 4/9/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The author built a demo of an agent that can provision GPUs via a tool call, with a fixed budget and a per-call cost configured (1000 total budget; 500 per `provision_gpu(a100)` call).
  • The first two tool calls are allowed, but the third call is hard-blocked with `DENY` returning `BUDGET_EXCEEDED` before the tool executes.
  • The system also produces authorization-related artifacts including hash-chained audit events and a verification envelope, supported by strict offline verification (`verifyEnvelope() => ok`).
  • The demo is positioned as an execution-time authorization “missing layer” for side-effecting agents, emphasizing a pipeline of proposal → authorization → execution rather than agent → tool directly.
  • The post raises a practical question for practitioners: whether teams enforce authorization at execution time or rely mainly on approvals, retries, or sandboxing.

Policy:

- budget = 1000

- each `provision_gpu(a100)` call = 500

Result:

- call 1 -> ALLOW

- call 2 -> ALLOW

- call 3 -> DENY (`BUDGET_EXCEEDED`)

Key point: the 3rd tool call is denied before execution. The tool never runs.
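The post doesn't include the implementation, but the budget gate it describes can be sketched in a few lines. Everything except `provision_gpu`, the 1000 budget, the 500 per-call cost, and the `ALLOW`/`DENY`/`BUDGET_EXCEEDED` decisions is an illustrative assumption:

```python
# Hypothetical sketch of the budget gate described above. The class and
# method names are assumptions; only the numbers and decision strings
# come from the post.

class BudgetGate:
    def __init__(self, budget: int):
        self.budget = budget
        self.spent = 0

    def authorize(self, cost: int) -> str:
        # The check runs BEFORE the tool executes: a call that would
        # exceed the budget is hard-blocked and never reaches the tool.
        if self.spent + cost > self.budget:
            return "DENY (BUDGET_EXCEEDED)"
        self.spent += cost
        return "ALLOW"

gate = BudgetGate(budget=1000)
for call in range(1, 4):
    decision = gate.authorize(cost=500)  # each provision_gpu(a100) call = 500
    print(f"call {call} -> {decision}")
# call 1 -> ALLOW
# call 2 -> ALLOW
# call 3 -> DENY (BUDGET_EXCEEDED)
```

Note that spend is only recorded on `ALLOW`, so a denied call doesn't consume budget.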

Also emits:

- authorization artifacts

- hash-chained audit events

- verification envelope

- strict offline verification: `verifyEnvelope() => ok`
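The internals of the demo's `verifyEnvelope()` aren't shown, but hash-chained audit events with strict offline verification are typically built like this (a sketch under that assumption; all names here are illustrative, not the demo's API):

```python
# Illustrative hash chain: each audit event commits to the hash of the
# previous one, so an offline verifier can detect any tampering by
# recomputing every link. Function names are assumptions, not the demo's.
import hashlib
import json

GENESIS = "0" * 64

def append_event(chain: list, event: dict) -> list:
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    body = {"prev": prev_hash, "event": event}
    # Canonical JSON (sorted keys) so the digest is reproducible offline.
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify_envelope(chain: list) -> bool:
    """Strict offline verification: recompute every link in the chain."""
    prev = GENESIS
    for entry in chain:
        body = {"prev": entry["prev"], "event": entry["event"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

chain = []
for decision in ["ALLOW", "ALLOW", "DENY:BUDGET_EXCEEDED"]:
    append_event(chain, {"tool": "provision_gpu", "decision": decision})
print(verify_envelope(chain))   # True
chain[1]["event"]["decision"] = "ALLOW"  # tamper with history
print(verify_envelope(chain))   # False
```

The point of the chain is that rewriting any past decision invalidates every later hash, so the audit log can be checked without trusting the system that produced it.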

Feels like this is the missing layer for side-effecting agents:

proposal -> authorization -> execution

rather than agent -> tool directly.
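The difference between the two wirings can be shown in a tiny sketch (none of these names come from the demo; they're assumptions for illustration):

```python
# Sketch of mediated execution: the agent emits a proposal, and the tool
# function is only invoked after an explicit ALLOW. A DENY means the side
# effect never happens. All names here are illustrative assumptions.

def execute_with_authorization(proposal: dict, authorize, tool):
    decision = authorize(proposal)
    if decision != "ALLOW":
        # Hard block before execution: the tool is never called.
        return {"executed": False, "decision": decision}
    return {"executed": True, "decision": decision,
            "result": tool(**proposal["args"])}

denied = execute_with_authorization(
    {"tool": "provision_gpu", "args": {"sku": "a100"}},
    authorize=lambda p: "DENY:BUDGET_EXCEEDED",
    tool=lambda sku: f"provisioned {sku}",
)
print(denied)  # {'executed': False, 'decision': 'DENY:BUDGET_EXCEEDED'}
```

The design choice is that the authorizer sits between proposal and execution, so a deny is enforced by the mediator rather than by asking the agent to police itself.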

Are you doing execution-time authorization, or mostly relying on approvals / retries / sandboxing?

Happy to share the exact output / demo flow if useful.

submitted by /u/docybo