Forensic Summary
Google has patched a critical prompt injection vulnerability in an agentic AI tool designed for filesystem operations, where insufficient input sanitisation enabled sandbox escape and arbitrary code execution. The flaw underscores the expanded attack surface of agentic AI systems that interface directly with operating system resources, and shows how LLM-native vulnerabilities such as prompt injection can translate into traditional high-severity RCE outcomes.
Read the full technical deep-dive on Grid the Grey: https://gridthegrey.com/posts/google-fixes-critical-rce-flaw-in-ai-based-antigravity-tool/
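The post does not include the vulnerable or patched code, so as a purely illustrative sketch, here is one common mitigation pattern for this class of bug: confining an agent's filesystem tool to a sandbox root and rejecting model-supplied paths that resolve outside it. The sandbox path, function name, and logic below are assumptions for illustration, not Google's implementation.

```python
# Hypothetical sketch of path confinement for an agentic filesystem tool.
# NOT the actual patched code; a generic defence against path traversal
# driven by injected prompts.
import os

SANDBOX_ROOT = "/tmp/agent_sandbox"  # assumed sandbox directory


def resolve_in_sandbox(user_path: str) -> str:
    """Resolve a model-supplied path and reject anything outside the sandbox.

    realpath() collapses '..' segments and symlinks AFTER joining, so
    traversal tricks like 'notes/../../etc/passwd' are caught on the
    resolved path rather than by naive string matching. Absolute paths
    supplied by the model are also rejected, because os.path.join
    discards the sandbox prefix when the second argument is absolute.
    """
    root = os.path.realpath(SANDBOX_ROOT)
    candidate = os.path.realpath(os.path.join(SANDBOX_ROOT, user_path))
    if os.path.commonpath([candidate, root]) != root:
        raise PermissionError(f"path escapes sandbox: {user_path!r}")
    return candidate
```

Checking the resolved path against the sandbox root (rather than pattern-matching the raw input) is the key design choice: it closes off `..` sequences, absolute paths, and symlink indirection in one place.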
