
AdaZoom-GUI: Adaptive Zoom-based GUI Grounding with Instruction Refinement

arXiv cs.CV · March 19, 2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • AdaZoom-GUI introduces an adaptive zoom-based GUI grounding framework with an instruction refinement module that rewrites natural language commands into explicit descriptions to improve localization accuracy.
  • It uses a conditional second-stage zoom-in strategy to better localize small GUI elements while avoiding unnecessary computation and context loss on simpler cases.
  • The approach is supported by a high-quality GUI grounding dataset and trained with Group Relative Policy Optimization (GRPO) to predict both click coordinates and element bounding boxes.
  • Experiments show state-of-the-art performance among models with comparable or larger parameter counts, highlighting its effectiveness for high-resolution GUI understanding and practical GUI agent deployment.
  • The work has potential downstream impact on automated GUI agents and interaction workflows that operate on high-resolution interfaces.
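The conditional second-stage zoom-in can be pictured as a simple control flow: run the grounding model once, and only re-run it on a magnified crop when the first-pass prediction is small relative to the screen. The sketch below is a minimal illustration of that idea, not the paper's implementation; `ground` is a hypothetical model call, and the `small_frac` and `zoom` thresholds are placeholder values.

```python
def adaptive_ground(size, instruction, ground, small_frac=0.005, zoom=3.0):
    """Conditional two-stage grounding sketch (illustrative only).

    size   -- (width, height) of the full screenshot
    ground -- hypothetical model call: ground(region, instruction) -> box,
              where region = (left, top, right, bottom) in screen coords
              and box = (x0, y0, x1, y1) relative to that region.
    """
    W, H = size
    # Stage 1: ground the instruction on the full screenshot.
    x0, y0, x1, y1 = ground((0.0, 0.0, float(W), float(H)), instruction)
    if (x1 - x0) * (y1 - y0) / (W * H) >= small_frac:
        return (x0, y0, x1, y1)  # element is large enough: skip the zoom pass

    # Stage 2: zoom into a window centred on the small first-pass prediction.
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    hw, hh = zoom * (x1 - x0) / 2, zoom * (y1 - y0) / 2
    left, top = max(0.0, cx - hw), max(0.0, cy - hh)
    right, bot = min(float(W), cx + hw), min(float(H), cy + hh)
    rx0, ry0, rx1, ry1 = ground((left, top, right, bot), instruction)
    # Map the refined box back to full-screenshot coordinates.
    return (left + rx0, top + ry0, left + rx1, top + ry1)
```

Gating the second pass on predicted element size is what avoids the "unnecessary computation and context loss on simpler cases" noted above: large elements are returned after one pass, and only small ones pay for the crop-and-rerun.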

Abstract

GUI grounding is a critical capability for vision-language models (VLMs) that enables automated interaction with graphical user interfaces by locating target elements from natural language instructions. However, grounding on GUI screenshots remains challenging due to high-resolution images, small UI elements, and ambiguous user instructions. In this work, we propose AdaZoom-GUI, an adaptive zoom-based GUI grounding framework that improves both localization accuracy and instruction understanding. Our approach introduces an instruction refinement module that rewrites natural language commands into explicit and detailed descriptions, allowing the grounding model to focus on precise element localization. In addition, we design a conditional zoom-in strategy that selectively performs a second-stage inference on predicted small elements, improving localization accuracy while avoiding unnecessary computation and context loss on simpler cases. To support this framework, we construct a high-quality GUI grounding dataset and train the grounding model using Group Relative Policy Optimization (GRPO), enabling the model to predict both click coordinates and element bounding boxes. Experiments on public benchmarks demonstrate that our method achieves state-of-the-art performance among models with comparable or even larger parameter sizes, highlighting its effectiveness for high-resolution GUI understanding and practical GUI agent deployment.
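The abstract's mention of GRPO training for joint click and bounding-box prediction can be made concrete with a small sketch. The reward design below is an assumption for illustration (the paper does not spell it out here): a click scores 1 if it lands inside the ground-truth box, plus the IoU between predicted and ground-truth boxes; GRPO then normalises each sampled completion's reward against its group's mean and standard deviation to form advantages.

```python
def grounding_reward(pred_point, pred_box, gt_box):
    """Hypothetical grounding reward: click-hit indicator + box IoU."""
    px, py = pred_point
    gx0, gy0, gx1, gy1 = gt_box
    hit = 1.0 if gx0 <= px <= gx1 and gy0 <= py <= gy1 else 0.0
    bx0, by0, bx1, by1 = pred_box
    # Intersection-over-union of predicted and ground-truth boxes.
    ix0, iy0 = max(bx0, gx0), max(by0, gy0)
    ix1, iy1 = min(bx1, gx1), min(by1, gy1)
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    union = ((bx1 - bx0) * (by1 - by0)
             + (gx1 - gx0) * (gy1 - gy0) - inter)
    iou = inter / union if union > 0 else 0.0
    return hit + iou

def grpo_advantages(rewards):
    """Group-relative advantage, as in GRPO: z-score each sampled
    completion's reward within its sampling group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

Because advantages are computed relative to a group of samples for the same instruction, GRPO needs no learned value model, which keeps the training loop lightweight; the specific reward terms above are stand-ins for whatever signal the authors actually used.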