We built an open-source proxy that enforces LLM agent rules at the API layer - 700 GitHub stars

Reddit r/artificial / 4/26/2026


Key Points

  • The post argues that prompt-based guardrails for LLM agents are unreliable, especially as context grows or agents perform multi-step chains.
  • It introduces “Caliber,” an open-source, provider-agnostic proxy that reads rule definitions from plain Markdown and enforces them at the API layer for every call.
  • The authors report strong early community traction, citing 700 GitHub stars and nearly 100 forks.
  • The project invites feedback, feature requests, and contributions from people building AI agents.
  • The core idea is to shift safety/control logic from the prompt into an external enforcement layer to improve consistency.

Cross-posting here because this problem affects everyone building with AI agents.

Prompt-based guardrails fail. The model follows your system prompt in a demo, then ignores the rules once the context grows or the agent chains multiple steps.

We built Caliber - an open-source proxy that reads your rules from plain markdown and enforces them at the API layer, not in the prompt. Every call. Provider-agnostic.

Just hit 700 GitHub stars ⭐ and nearly 100 forks - the reception from devs building with AI has been amazing.

Repo: https://github.com/caliber-ai-org/ai-setup

Would love:

- Feedback on the approach

- Feature requests from people building AI agents

- Anyone who wants to contribute to the project

Building this open-source for the community.

submitted by /u/Substantial-Cost-429