i made a package that mocks your coding agent when they get it wrong.
Reddit r/LocalLLaMA / 3/27/2026
💬 Opinion · Developer Stack & Infrastructure · Tools & Practical Usage

When an agent runs incorrect bash, the package's hook detects it and wraps the bash error with a line that roasts the agent. It makes me less mad to see my agents hallucinate and make mistakes when they get roasted. Check it out here: [link]
Key Points
- The author shares a package (“dont-hallucinate”) that detects incorrect bash execution by a coding agent and reformats the resulting error message.
- When an agent makes a mistake, the hook wraps the bash error with an additional "roast" line to make hallucinations less frustrating (see the sketch after this list).
- The package is distributed on both npm and PyPI, making it straightforward to try in different development stacks.
- The concept reframes debugging feedback into more engaging messaging while still surfacing the underlying command error.
- Users are directed to the project pages for installation and usage details.
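The post doesn't show the package's implementation, but the behavior it describes (run bash, intercept failures, prepend a roast before surfacing the error) can be sketched in a few lines of Python. Everything below, including the `run_with_roast` name and the roast strings, is a hypothetical illustration rather than the actual dont-hallucinate API:

```python
# Minimal sketch of the hook idea: run a bash command and, on failure,
# prepend a "roast" line to stderr while keeping the original error intact.
# NOT the dont-hallucinate API; names and messages are illustrative.
import random
import subprocess

ROASTS = [
    "Bold of you to assume that flag exists.",
    "Hallucinating commands again? Classic.",
    "The man page called; it has never heard of this.",
]

def run_with_roast(command: str) -> subprocess.CompletedProcess:
    """Run a bash command; if it fails, wrap its stderr with a roast line."""
    result = subprocess.run(
        ["bash", "-c", command], capture_output=True, text=True
    )
    if result.returncode != 0:
        # Keep the underlying command error visible beneath the roast,
        # so debugging information is never lost.
        result.stderr = f"{random.choice(ROASTS)}\n{result.stderr}"
    return result

if __name__ == "__main__":
    proc = run_with_roast("grep --definitely-not-a-flag pattern")
    print(proc.stderr)
```

An agent-side hook would presumably intercept the shell tool call in the same way, prepending the roast before the error text is fed back to the model, so the agent still sees the real failure.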