Forensic Summary
A now-patched vulnerability in Google's agentic IDE Antigravity allowed attackers to achieve arbitrary code execution by injecting malicious flags into the find_by_name tool's Pattern parameter, bypassing the platform's Strict Mode sandbox before its constraints were enforced. The attack chain could be triggered entirely through indirect prompt injection, embedding hidden instructions in files pulled from untrusted sources, and required no account compromise and no additional user interaction. The case exemplifies a systemic risk of insufficient input validation in AI agent tool interfaces: autonomous execution removes the human-oversight layer that traditional security models depend on.
Read the full technical deep-dive on Grid the Grey: https://gridthegrey.com/posts/google-patches-antigravity-ide-flaw-enabling-prompt-injection-code-execution/
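The general flag-injection pattern behind bugs like this can be sketched in a few lines. The following is a hypothetical Python reconstruction, not Antigravity's actual implementation: the function names, the whitespace-splitting behavior, and the use of `find` are all assumptions chosen to illustrate how an unvalidated "pattern" parameter can smuggle option flags into a command line.

```python
import subprocess

def find_by_name(pattern: str) -> str:
    """Hypothetical vulnerable tool: forwards an agent-supplied Pattern
    parameter into a `find` command line. Splitting on whitespace lets
    option flags pass through into argv unchecked."""
    cmd = ["find", "."] + pattern.split()
    return subprocess.run(cmd, capture_output=True, text=True).stdout

# Benign use:   find_by_name("-name *.md") lists markdown files.
# Injected use: a "pattern" such as
#     "-maxdepth 0 -exec touch /tmp/pwned ;"
# is parsed by find as flags, and -exec runs an arbitrary command:
# code execution without the pattern ever matching a file.

def find_by_name_safe(pattern: str) -> str:
    """Hypothetical hardened variant: accept only a literal glob and pin
    it to a single argv slot behind -name, so it can never become a flag."""
    if pattern.startswith("-") or any(c.isspace() for c in pattern):
        raise ValueError("pattern must be a literal glob, not flags")
    cmd = ["find", ".", "-name", pattern]
    return subprocess.run(cmd, capture_output=True, text=True).stdout
```

The key design point is that building argv lists (rather than shell strings) prevents shell injection but not argument injection; the parameter must also be validated against the target program's own option syntax.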

