Measure Twice, Click Once: Co-evolving Proposer and Visual Critic via Reinforcement Learning for GUI Grounding

arXiv cs.LG / 4/24/2026


Key Points

  • The paper addresses GUI grounding—mapping natural-language instructions to exact pixel coordinates—where current models often miss precise localization despite understanding semantic intent.
  • Instead of relying on static self-consistency selection (e.g., geometric clustering over Pass@k samples), it introduces a learnable selection mechanism that picks the best target by having a visual critic critique the model’s proposals rendered on the screenshot.
  • It proposes a co-evolving “Propose-then-Critic” framework, combining proposer and critic training in a mutually reinforcing loop to handle the mismatch between their capabilities.
  • The training method uses maturity-aware adaptive co-evolutionary reinforcement learning to dynamically balance proposer/critic objectives, improving both spatial exploration and critic discrimination.
  • Experiments across six benchmarks show significant gains in grounding accuracy and in the critic’s reliability.
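The inference-time selection step described above can be sketched as follows. Everything here is a toy stand-in, not the paper's models: the proposer is a noisy sampler around a known target, "rendering" just attaches a marker to the screenshot, and the critic is an oracle that scores a rendered marker by its distance to the target.

```python
import random

def render(screenshot, xy):
    # Toy "rendering": attach the candidate point to the screenshot.
    # A real system would draw a visual marker on the image itself.
    return {"image": screenshot, "marker": xy}

def select_by_critic(propose, critic, instruction, screenshot, k=8):
    """Propose-then-Critic selection (illustrative): sample k coordinate
    proposals, render each on the screenshot, and return the proposal
    the critic scores highest."""
    proposals = [propose(instruction, screenshot) for _ in range(k)]
    scores = [critic(instruction, render(screenshot, p)) for p in proposals]
    return max(zip(scores, proposals))[1]

# Toy demo: the true target is (100, 40); the proposer samples noisily
# around it, and the critic scores by negative L1 distance to the target.
random.seed(0)
target = (100, 40)
propose = lambda instr, img: (target[0] + random.randint(-30, 30),
                              target[1] + random.randint(-30, 30))
critic = lambda instr, rendered: (-abs(rendered["marker"][0] - target[0])
                                  - abs(rendered["marker"][1] - target[1]))
best = select_by_critic(propose, critic, "click the Save button",
                        "screen.png", k=8)
```

With an accurate critic, the selected proposal is the best of the k samples, which is why the method can recover the Pass@k headroom that static geometric clustering misses when proposals are spatially dispersed.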

Abstract

Graphical User Interface (GUI) grounding requires mapping natural-language instructions to precise pixel coordinates. However, due to visually homogeneous elements and dense layouts, models typically grasp semantic intent yet struggle to achieve precise localization. While scaling the number of sampling attempts (Pass@k) reveals potential gains, static self-consistency strategies based on geometric clustering often yield limited improvements because the model's predictions tend to be spatially dispersed. In this paper, we propose replacing static consistency strategies with a learnable selection mechanism that selects the optimal target by critiquing the model's own proposals rendered on the screenshot. Given the significant disparity between the model's grounding and critiquing capabilities, we propose a co-evolving Propose-then-Critic framework. To optimize both capabilities jointly, we introduce a maturity-aware adaptive co-evolutionary reinforcement learning paradigm that dynamically balances the training objectives of the proposer and the critic: the diversity of the proposer's outputs enhances the critic's robustness, while the critic's maturing discrimination ability in turn unlocks the proposer's potential for extensive spatial exploration. This mutual reinforcement lets both capabilities co-evolve and generalize to diverse, complex interface layouts. Extensive experiments over 6 benchmarks show that our method significantly enhances both grounding accuracy and critic reliability.
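The "maturity-aware" balancing is described only at a high level. A minimal sketch, assuming the critic's running accuracy serves as its maturity signal, might weight the two objectives as below; the specific schedule (linear blend with a floor) is our illustration, not the paper's formula.

```python
def maturity_weights(critic_accuracy: float, floor: float = 0.1):
    """Blend proposer/critic objectives by the critic's 'maturity'
    (illustrative schedule). Early in training (low critic accuracy)
    the combined loss emphasizes the critic so it learns to
    discriminate; as the critic matures, weight shifts toward the
    proposer's spatial-exploration objective. The floor keeps both
    objectives active so neither capability stops improving."""
    m = min(1.0, max(0.0, critic_accuracy))   # clamp maturity to [0, 1]
    w_prop, w_crit = max(floor, m), max(floor, 1.0 - m)
    total = w_prop + w_crit
    return w_prop / total, w_crit / total     # normalized to sum to 1

def combined_loss(loss_proposer: float, loss_critic: float,
                  critic_accuracy: float) -> float:
    """Weighted joint objective for one co-evolution training step."""
    w_p, w_c = maturity_weights(critic_accuracy)
    return w_p * loss_proposer + w_c * loss_critic
```

For example, at critic accuracy 0.0 the critic term dominates, and at 1.0 the proposer term dominates, matching the co-evolution dynamic the abstract describes.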