Built an open-source runtime layer to stop AI agents before they overspend or take risky actions — looking for feedback

Reddit r/artificial / 5/3/2026


Key Points

  • The author describes a common control problem with AI agents: once they start using tools and external systems, it can be difficult to prevent runaway behavior or risky side effects.
  • They argue that existing permission checks and rate limits don’t address failure modes like retry loops, branching/fan-out, unexpected expensive model usage, excessive emails, or actions continuing after things go off track.
  • The project, Cycles, is introduced as an open-source “runtime authority layer” that checks whether an agent action is still within budget/policy before it executes.
  • Cycles follows a reserve → execute → commit pattern: it reserves allowance, runs the action, and then commits what actually happened; actions that would exceed limits are blocked before they run.
  • The author asks for community feedback on current practices for gating agent actions and whether the reserve/execute/commit approach is worth the infrastructure for real systems.

If you’re experimenting with AI agents, you’ve probably run into this problem: once an agent starts calling tools, APIs, models, email systems, databases, or jobs, it can become hard to control what happens next.

Permissions answer: “Can this agent use this tool at all?”

Rate limits answer: “How fast can it call it?”

But agents fail in a different way: they retry, loop, fan out, call expensive models, send too many emails, trigger jobs, or keep acting after the run has already gone off track.

I built Cycles to tackle this problem.

It’s an open-source runtime authority layer for AI agents. Before an agent takes a costly or risky action, Cycles checks whether that action is still within the allowed budget or policy. If yes, it reserves the allowance, the action runs, and then the agent commits what actually happened. If not, the action is blocked before execution.
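To make the pattern concrete, here's a minimal sketch of a reserve → execute → commit guard. This is not the actual Cycles API; the `BudgetAuthority` class, method names, and cost units are all illustrative assumptions.

```python
import threading

class BudgetAuthority:
    """Hypothetical budget guard illustrating reserve -> execute -> commit.
    Not the real Cycles API; names and units here are made up."""

    def __init__(self, budget: float):
        self._lock = threading.Lock()   # concurrent agents may share one budget
        self._remaining = budget
        self._reserved = 0.0

    def reserve(self, estimated_cost: float) -> bool:
        # Check BEFORE execution: refuse if the estimate would exceed
        # what's left after other in-flight reservations.
        with self._lock:
            if estimated_cost > self._remaining - self._reserved:
                return False
            self._reserved += estimated_cost
            return True

    def commit(self, estimated_cost: float, actual_cost: float) -> None:
        # Release the reservation and charge what actually happened,
        # which may differ from the estimate.
        with self._lock:
            self._reserved -= estimated_cost
            self._remaining -= actual_cost

def guarded_call(authority: BudgetAuthority, estimated_cost: float, action):
    """Run `action` only if the budget authority approves it first."""
    if not authority.reserve(estimated_cost):
        raise RuntimeError("blocked: action would exceed budget")
    actual_cost = action()              # the tool/API/model call itself
    authority.commit(estimated_cost, actual_cost)
    return actual_cost
```

The point of the reservation step is that a runaway retry loop drains its allowance and starts getting blocked before the next call executes, rather than after the bill arrives.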

The goal is to make agent execution safer under:

  • runaway retry loops
  • unexpected model/API spend
  • multi-step agent workflows
  • concurrent agents sharing the same budget
  • per-user / per-tenant limits
  • risky actions like emails, DB writes, API calls, or job triggers

This is not meant to replace observability or tracing. Those are still useful. The gap is the moment before execution, not after the bill or side effect already happened.

Repo: https://github.com/runcycles

Curious how others here are handling this today:

Do you gate agent actions before execution?
Do you rely mostly on logs / alerts after the fact?

Would a reserve → execute → commit model be useful in real agent systems, or does it feel like too much infrastructure too early?

submitted by /u/jkoolcloud