Google Patches Antigravity IDE Flaw Enabling Prompt Injection Code Execution

Dev.to / 4/25/2026


Key Points

  • A patched vulnerability in Google’s agentic IDE Antigravity allowed attackers to achieve arbitrary code execution by injecting malicious flags into the find_by_name tool’s Pattern parameter.
  • The exploit bypassed the IDE’s Strict Mode sandbox, with security constraints only enforced after the unsafe execution path was already reached.
  • The attack could be carried out via indirect prompt injection by embedding hidden instructions in files obtained from untrusted sources, without account compromise or extra user interaction.
  • The incident highlights a systemic risk for AI agent tool interfaces: weak input validation can undermine traditional security assumptions that rely on human oversight.
  • The article directs readers to a deeper technical analysis by Grid the Grey for details on the attack chain and remediation.
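The ordering failure in the second point, where the sandbox check runs only after the unsafe path has already been taken, can be sketched in miniature. Everything below is hypothetical: `dispatch_vulnerable`, `dispatch_fixed`, and the `BLOCKED_TOOLS` policy are illustrative names, not Antigravity's actual internals.

```python
# Hypothetical sketch of a check-after-use bug: a Strict-Mode-style
# policy is consulted only after the tool has already run.
BLOCKED_TOOLS = {"shell"}   # tools the policy is supposed to deny
EXECUTION_LOG = []          # records which tools actually executed

def enforce_strict_mode(tool: str) -> None:
    if tool in BLOCKED_TOOLS:
        raise PermissionError(f"Strict Mode blocks tool: {tool}")

def execute(tool: str, arg: str) -> str:
    EXECUTION_LOG.append(tool)          # the irreversible side effect
    return f"{tool}({arg}) ran"

def dispatch_vulnerable(tool: str, arg: str) -> str:
    result = execute(tool, arg)         # unsafe path reached first...
    enforce_strict_mode(tool)           # ...policy applied too late
    return result

def dispatch_fixed(tool: str, arg: str) -> str:
    enforce_strict_mode(tool)           # validate before any side effect
    return execute(tool, arg)
```

Both dispatchers raise the same `PermissionError` for a blocked tool, so the bug is invisible to a caller inspecting exceptions; only the execution log reveals that the vulnerable version already ran the tool before the check fired.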

Forensic Summary

A now-patched vulnerability in Google's agentic IDE Antigravity allowed attackers to achieve arbitrary code execution by injecting malicious flags into the find_by_name tool's Pattern parameter, bypassing the platform's Strict Mode sandbox before security constraints were enforced. The attack chain could be triggered entirely via indirect prompt injection—embedding hidden instructions in files pulled from untrusted sources—requiring no account compromise and no additional user interaction. This case exemplifies the systemic risk of insufficient input validation in AI agent tool interfaces, where autonomous execution removes the human oversight layer that traditional security models depend on.
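The flag-injection mechanism generalizes to any tool that interpolates a caller-supplied pattern into a command line. A minimal sketch under that assumption (the helper names are hypothetical, and this is not how Antigravity actually builds its find_by_name command):

```python
import subprocess

def find_by_name_unsafe(pattern: str) -> str:
    # Unvalidated interpolation: a pattern such as "'*' -maxdepth 0"
    # (or worse, one smuggling "-exec") is parsed by find as extra
    # flags rather than as a filename pattern.
    cmd = f"find . -name {pattern}"
    return subprocess.run(cmd, shell=True,
                          capture_output=True, text=True).stdout

def find_by_name_safe(pattern: str) -> str:
    # Mitigation sketch: reject flag-like input up front and pass an
    # argument vector directly, with no shell parsing in between.
    if pattern.startswith("-"):
        raise ValueError("pattern must not begin with '-'")
    proc = subprocess.run(["find", ".", "-name", pattern],
                          capture_output=True, text=True)
    return proc.stdout
```

The list-form `subprocess.run` call keeps each argument in a fixed position, so even a hostile pattern is consumed as the operand of `-name` rather than reinterpreted as a new flag; the prefix check adds defense in depth.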

Read the full technical deep-dive on Grid the Grey: https://gridthegrey.com/posts/google-patches-antigravity-ide-flaw-enabling-prompt-injection-code-execution/