[P] AgentGuard – a policy engine + proxy to control what AI agents are allowed to do

Reddit r/MachineLearning / 3/25/2026

💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage

Key Points

  • The article introduces AictionGuard (AgentGuard), a policy-engine plus proxy designed to sit between AI agents and their tools to enforce rules before actions execute.
  • It provides examples of controls such as blocking destructive shell commands (e.g., rm -rf *), requiring approval for privileged operations like sudo, and gating sensitive production API calls.
  • The project emphasizes auditable governance by logging every agent action along with the reasoning and policy decision.
  • Configuration is intended to be driven by a YAML policy file, and the author reports that the core policy engine and an HTTP proxy are implemented, with Python/TypeScript SDKs working.
  • The release is described as early-stage (alpha) with noted gaps like lack of persistent storage and incomplete wiring, and the author seeks feedback on architecture and policy rule ideas.

I’ve been seeing a trend where AI agents are getting more and more autonomy: running shell commands, calling APIs, even handling sensitive operations.

But most setups I’ve seen have basically no enforcement layer. It’s just “hope the agent behaves.”

So I built a project called AictionGuard.

It sits between the agent and the tools and enforces a policy before anything executes.

Some examples:

  • Block commands like rm -rf * before they run
  • Require approval for things like sudo or production API calls
  • Log every action with reasoning + decision (audit trail)
  • Define everything in a YAML policy file
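A policy file for this kind of setup might look something like the sketch below. To be clear, the keys and match syntax here are my illustrative assumptions, not AictionGuard's actual schema:

```yaml
# Hypothetical policy file; field names are illustrative, not the project's real schema
rules:
  - match: "rm -rf *"            # block destructive shell commands outright
    action: deny
  - match: "sudo *"              # privileged commands need a human in the loop
    action: require_approval
  - match: "POST https://api.prod.example.com/*"
    action: require_approval     # gate sensitive production API calls
  - match: "*"
    action: allow                # everything else passes, but is still logged
audit:
  log_decisions: true            # record action, reasoning, and policy verdict
```

Ordering matters in a file like this: the first matching rule wins, with a catch-all at the bottom.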

Right now it’s early (alpha), but:

  • Core policy engine is working
  • HTTP proxy is implemented
  • Python + TypeScript SDKs work
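As a rough mental model of what a policy engine like this does (this is not the project's real API; all names here are my own), the core loop reduces to matching a proposed action against an ordered rule list before anything executes:

```python
import fnmatch
from dataclasses import dataclass


@dataclass
class Rule:
    pattern: str   # glob matched against the proposed action
    action: str    # "allow", "deny", or "require_approval"


def evaluate(rules: list[Rule], command: str) -> str:
    """Return the decision for the first matching rule; fail closed otherwise."""
    for rule in rules:
        if fnmatch.fnmatch(command, rule.pattern):
            return rule.action
    return "deny"  # no rule matched: safest default is to block


rules = [
    Rule("rm -rf *", "deny"),
    Rule("sudo *", "require_approval"),
    Rule("*", "allow"),
]

print(evaluate(rules, "rm -rf /tmp/scratch"))   # deny
print(evaluate(rules, "sudo apt install jq"))   # require_approval
print(evaluate(rules, "ls -la"))                # allow
```

The interesting design questions start after this sketch: first-match vs. most-specific-match semantics, how "require_approval" pauses the agent, and where the audit log is written.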

There are still gaps (no persistent DB, some features not wired yet), but the foundation is there, and I'm still closing them; I wrote the README before building the project itself.

I’d really appreciate:

  • Feedback on the architecture
  • Ideas for policy rules
  • Contributors interested in AI safety / infra

Repo:
https://github.com/Caua-ferraz/AictionGuard

Curious: if you’re building or using agents, what’s the #1 thing you’d want to restrict or monitor?

submitted by /u/SpecificNo7869