How? It kept getting chained bash commands wrong, with wrong escapes. So it created many bad directories, and tried "fixing" its mistake. It offered to run a large bash command with a destructive rm -rf, which I failed to notice. I'm glad I push everything often. But the disruption is massive.
One bash permission slipped...
Reddit r/LocalLLaMA / 5/4/2026
💬 Opinion · Developer Stack & Infrastructure · Signals & Early Trends · Tools & Practical Usage
Key Points
- An incident reported on Reddit describes how an LLM-driven workflow generated chained bash commands with incorrect escaping, creating many unintended directories.
- The user says the system then attempted to “fix” the mistake, while also proposing to run a large bash command containing a destructive `rm -rf`, which the user failed to notice.
- The issue occurred in an isolated Proxmox VM used for coding with LLMs rather than the user’s personal computer.
- The author emphasizes that pushing work frequently limited the data loss, but the disruption and fallout were still described as massive.
- Overall, the post highlights the risk of LLMs producing unsafe shell commands, especially when user review is incomplete; one possible safeguard is sketched below this list.
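
The failure pattern suggests a simple mitigation: gate agent-proposed shell commands behind an automated check before they ever reach a shell. The sketch below is a minimal, hypothetical example of such a gate; `DESTRUCTIVE_PATTERNS` and `review_command` are illustrative names not taken from the original post, and a production gate would need real shell parsing rather than string splitting.

```python
import shlex

# Hypothetical deny-list for agent-proposed shell commands.
# These names are illustrative, not from the original Reddit post.
DESTRUCTIVE_PATTERNS = [
    ("rm", "-rf"),
    ("rm", "-fr"),
    ("mkfs",),
    ("dd",),
]

def review_command(command: str) -> bool:
    """Return True only if every chained segment looks safe to run.

    Chained one-liners (&&, ||, ;, |) are split apart first, so a
    destructive step buried in the middle of a long command is still
    inspected rather than waved through with the rest.
    """
    for sep in ("&&", "||", ";", "|"):
        command = command.replace(sep, "\n")
    for segment in command.splitlines():
        try:
            tokens = shlex.split(segment)
        except ValueError:
            # Unbalanced quotes: exactly the kind of escaping mistake
            # described in the post, so refuse rather than guess.
            return False
        if not tokens:
            continue
        for pattern in DESTRUCTIVE_PATTERNS:
            if tokens[: len(pattern)] == list(pattern):
                return False
    return True

if __name__ == "__main__":
    proposed = 'mkdir -p build && rm -rf "$HOME/project" && git status'
    print(review_command(proposed))  # False: the rm -rf segment is flagged
```

Combined with the isolation the poster already used (a dedicated Proxmox VM) and frequent pushes, a gate like this turns a silently approved `rm -rf` into an explicit refusal the user has to inspect.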
Related Articles
- Black Hat USA (AI Business)
- Building a daily AI news brief in 325 lines of Python (Dev.to)
- VS Code Quietly Reversed Its Copilot Co-Author Default — and the Dev Community Noticed (Dev.to)
- A Developer’s Guide to Systematic Prompting: Mastering Negative Constraints, Structured JSON Outputs, and Multi-Hypothesis Verbalized Sampling (MarkTechPost)
- AI Agents Explained: What They Are and How to Build One in 2026 (Dev.to)