Benchmarking and Improving GUI Agents in High-Dynamic Environments

arXiv cs.CV / 4/29/2026

📰 News · Models & Research

Key Points

  • The paper argues that prior GUI agents mostly rely on single-screenshot decision-making, which can fail in high-dynamic interfaces because the resulting decision process is only partially observable, or even unobservable.
  • It introduces DynamicGUIBench, an online benchmark covering ten GUI applications and interaction scenarios in which crucial interface elements change significantly between actions.
  • It proposes DynamicUI, an agent that takes screen-recording videos of the interaction as input and uses a dynamic perceiver that clusters video frames, captions the cluster centroids, and iteratively selects the most informative frames as salient context (a minimal sketch follows this list).
  • DynamicUI further refines its internal reasoning with an action-conditioned filtering strategy that reduces thought-action inconsistency and redundancy, and uses a reflection module to guide subsequent actions.
  • Experiments show DynamicUI substantially improves performance on the new dynamic benchmark while remaining competitive on other public GUI benchmarks.
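
The dynamic perceiver described above lends itself to a simple pipeline: embed frames, cluster them, pick a representative per cluster, and greedily keep the most informative representatives. The sketch below illustrates that flow in Python; the paper does not publish this code, and `embed_frame`, `caption_frame`, the k-means choice, and the farthest-point heuristic are all hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of a dynamic-perceiver-style frame selector (an illustration,
# not the paper's code). Frames are numpy arrays; embed_frame / caption_frame
# would come from a vision encoder and a VLM in a real system.
import numpy as np
from sklearn.cluster import KMeans

def embed_frame(frame: np.ndarray) -> np.ndarray:
    """Hypothetical frame embedder; placeholder that just flattens pixels."""
    return frame.reshape(-1).astype(np.float32)

def caption_frame(frame: np.ndarray) -> str:
    """Hypothetical captioner; a real system would call a VLM here."""
    return f"frame with mean intensity {frame.mean():.1f}"

def select_salient_frames(frames, k=5, budget=3):
    """Cluster video frames, take the frame nearest each centroid as that
    segment's representative, then greedily keep the representatives that are
    farthest apart in embedding space as the salient dynamic context."""
    embs = np.stack([embed_frame(f) for f in frames])
    k = min(k, len(frames))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embs)
    reps = []
    for c in range(k):
        idx = np.where(km.labels_ == c)[0]
        center = km.cluster_centers_[c]
        reps.append(idx[np.argmin(np.linalg.norm(embs[idx] - center, axis=1))])
    # Greedy farthest-point selection: iteratively add the representative most
    # distant from those already chosen (a crude proxy for "informative").
    chosen = [reps[0]]
    while len(chosen) < min(budget, len(reps)):
        rest = [r for r in reps if r not in chosen]
        dists = [min(np.linalg.norm(embs[r] - embs[c]) for c in chosen) for r in rest]
        chosen.append(rest[int(np.argmax(dists))])
    chosen.sort()
    return [(i, caption_frame(frames[i])) for i in chosen]
```

In practice the embedder and captioner would be swapped for real vision-language components; the cluster-then-greedily-select structure is the part the sketch is meant to convey.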

Abstract

Recent advances in Graphical User Interface (GUI) agents have predominantly focused on training paradigms such as supervised fine-tuning (SFT) and reinforcement learning (RL). However, the challenge of high-dynamic GUI environments remains largely underexplored. Existing agents typically rely on a single screenshot after each action for decision-making, leading to a partially observable (or even unobservable) Markov decision process in which key GUI states, including information important for choosing actions, are often inadequately captured. To systematically explore this challenge, we introduce DynamicGUIBench, a comprehensive online GUI benchmark spanning ten applications and diverse interaction scenarios characterized by important interface changes between actions. Furthermore, we present DynamicUI, an agent designed for dynamic interfaces, which takes screen-recording videos of the interaction process as input and consists of three components: a dynamic perceiver, a refinement strategy, and a reflection module. Specifically, the dynamic perceiver clusters frames of the GUI video, generates captions for the cluster centroids, and iteratively selects the most informative frames as the salient dynamic context. Because there may be inconsistencies and noise between the selected frames and the agent's textual context, the refinement strategy employs action-conditioned filtering to refine the agent's thoughts, mitigating thought-action inconsistency and redundancy. Based on the refined agent trajectories, the reflection module provides effective and accurate guidance for subsequent actions. Experiments on DynamicGUIBench demonstrate that DynamicUI significantly improves performance in dynamic GUI environments while maintaining competitive performance on other public benchmarks.
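
To make the action-conditioned filtering idea concrete, the hedged sketch below scores each sentence of a generated thought against the executed action and drops sentences that are ungrounded in the action or merely repeat earlier content. The token-overlap heuristic and all names (`Action`, `filter_thought`, `min_overlap`) are hypothetical simplifications for illustration, not the paper's algorithm.

```python
# Hedged sketch of action-conditioned thought filtering (an illustration,
# not DynamicUI's method). Keeps only thought sentences that mention the
# executed action's target and add tokens not already kept.
import re
from dataclasses import dataclass

@dataclass
class Action:
    kind: str     # e.g. "click", "type", "scroll"
    target: str   # e.g. the UI element the action operated on

def _tokens(text: str) -> set:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def filter_thought(thought: str, action: Action, min_overlap: float = 0.2) -> str:
    """Split the thought into sentences, keep each sentence only if it
    overlaps the action description and contributes new tokens."""
    action_toks = _tokens(f"{action.kind} {action.target}")
    kept, seen = [], set()
    for sent in re.split(r"(?<=[.!?])\s+", thought.strip()):
        toks = _tokens(sent)
        if not toks:
            continue
        overlap = len(toks & action_toks) / len(action_toks)
        novel = toks - seen
        # Keep a sentence only if it is grounded in the action and non-redundant.
        if overlap >= min_overlap and novel:
            kept.append(sent)
            seen |= toks
    return " ".join(kept)

# Example: a sentence about an unrelated "settings menu" is dropped when the
# executed action was clicking the "Submit" button.
act = Action(kind="click", target="Submit button")
raw = ("I should open the settings menu. The Submit button is now enabled. "
       "I will click the Submit button to send the form.")
print(filter_thought(raw, act))
```

A real refinement strategy would condition on the agent's full trajectory rather than bag-of-words overlap, but the structure, filtering thoughts against the action actually taken before passing them to a reflection step, follows the description above.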