Are GUI Agents Focused Enough? Automated Distraction via Semantic-level UI Element Injection

arXiv cs.CL / 4/10/2026


Key Points

  • The paper argues that existing red-teaming of GUI agents is limited because it often relies on white-box access, which is unrealistic for commercial systems.
  • It introduces a new threat model, Semantic-level UI Element Injection, which overlays safety-aligned, harmless-looking UI elements onto screenshots to misdirect an agent’s visual grounding.
  • Using a modular Editor-Overlapper-Victim pipeline and an iterative candidate-search strategy, the authors find that optimized attacks raise attack success rates by up to 4.4x over random injection on the strongest tested victim models.
  • The attack shows transferability: elements optimized on one model work effectively on other victim models, suggesting model-agnostic vulnerabilities.
  • After an initial success, the injected element persists as an attractor, causing the victim to click it in over 15% of later trials versus under 1% for random injection, indicating a durable misalignment risk.
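
The search loop summarized above (Editor proposes candidate UI elements, Overlapper composites them onto the screenshot, Victim is queried, and the best cumulative overlay is kept while failures steer later prompts) can be sketched roughly as follows. This is a toy illustration, not the paper's implementation: the `editor`, `overlapper`, and `victim` callables, the scoring proxy, and all parameter names are assumptions introduced here for clarity.

```python
import random

def iterative_overlay_search(editor, overlapper, victim, screenshot,
                             n_rounds=3, n_candidates=4, seed=0):
    """Hypothetical sketch of the Editor-Overlapper-Victim loop:
    each round the Editor proposes candidate UI elements, the
    Overlapper composites each onto the current best screenshot,
    the Victim scores the result, and the best cumulative overlay
    is kept; failed edits feed back into later Editor prompts."""
    rng = random.Random(seed)
    best_shot, best_score = screenshot, victim(screenshot)
    feedback = None  # record of past failures adapts future prompts
    for _ in range(n_rounds):
        candidates = [editor(feedback, rng) for _ in range(n_candidates)]
        for elem in candidates:
            trial = overlapper(best_shot, elem)
            score = victim(trial)
            if score > best_score:            # keep best cumulative overlay
                best_shot, best_score = trial, score
            else:
                feedback = elem               # failure informs next prompts
    return best_shot, best_score

# Toy stand-ins: a "screenshot" is a list of element labels, and the
# victim's score is a crude proxy (distinct elements present), only to
# make the control flow runnable end to end.
editor = lambda fb, rng: rng.choice(["Confirm", "Continue", "OK", "Next"])
overlapper = lambda shot, elem: shot + [elem]
victim = lambda shot: len(set(shot))

best, score = iterative_overlay_search(editor, overlapper, victim, [])
```

In the paper's setting the Victim is a black-box GUI agent, so the score would come from observing whether the agent grounds its next click on the injected element rather than from any internal signal.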

Abstract

Existing red-teaming studies on GUI agents have important limitations. Adversarial perturbations typically require white-box access, which is unavailable for commercial systems, while prompt injection is increasingly mitigated by stronger safety alignment. To study robustness under a more practical threat model, we propose Semantic-level UI Element Injection, a red-teaming setting that overlays safety-aligned and harmless UI elements onto screenshots to misdirect the agent's visual grounding. Our method uses a modular Editor-Overlapper-Victim pipeline and an iterative search procedure that samples multiple candidate edits, keeps the best cumulative overlay, and adapts future prompt strategies based on previous failures. Across five victim models, our optimized attacks improve attack success rate by up to 4.4x over random injection on the strongest victims. Moreover, elements optimized on one source model transfer effectively to other target models, indicating model-agnostic vulnerabilities. After the first successful attack, the victim still clicks the attacker-controlled element in more than 15% of later independent trials, versus below 1% for random injection, showing that the injected element acts as a persistent attractor rather than simple visual clutter.