GUIDE: Interpretable GUI Agent Evaluation via Hierarchical Diagnosis

arXiv cs.AI / 4/7/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that evaluating GUI agents is difficult because their long, visually grounded, open-ended trajectories require judgments that are both accurate and interpretable, not just holistic binary scores.
  • It introduces GUIDE, a hierarchical evaluation framework that breaks a full trajectory into semantically coherent subtasks, diagnoses each subtask in context, and then aggregates sub-diagnostics into an overall task verdict.
  • GUIDE’s subtask-level diagnosis produces structured error analyses and corrective recommendations, aiming to pinpoint where and why an agent fails.
  • The authors validate GUIDE on three benchmarks (industrial e-commerce, AGENTREWARDBENCH, and AndroidBench) and report up to a 5.35 percentage-point accuracy improvement over the strongest baseline.
  • By evaluating bounded subtask segments rather than entire long trajectories, GUIDE is designed to reduce context overload that harms performance in existing evaluators as tasks get more complex.

Abstract

Evaluating GUI agents presents a distinct challenge: trajectories are long, visually grounded, and open-ended, yet evaluation must be both accurate and interpretable. Existing approaches typically apply a single holistic judgment over the entire action-observation sequence, a strategy that proves unreliable on long-horizon tasks and yields binary verdicts offering no insight into where or why an agent fails. This opacity limits the utility of evaluation as a diagnostic tool for agent development. We introduce GUIDE (GUI Understanding and Interpretable Diagnostic Evaluation), a framework that decomposes trajectory assessment into three sequential stages mirroring the compositional structure of GUI tasks. Trajectory Segmentation partitions the full trace into semantically coherent subtask units. Subtask Diagnosis evaluates each unit in context, assigning a completion verdict and generating a structured error analysis with corrective recommendations. Overall Summary aggregates per-subtask diagnoses into a task-level judgment. By operating on bounded subtask segments rather than full trajectories, GUIDE mitigates the context overload that degrades existing evaluators as task complexity grows. We validate GUIDE on three benchmarks: an industrial e-commerce dataset of 932 trajectories, AGENTREWARDBENCH spanning five web agent tasks with 1302 trajectories, and AndroidBench for mobile device control. Across all settings, GUIDE substantially outperforms existing evaluators, achieving up to 5.35 percentage points higher accuracy than the strongest baseline, while producing structured diagnostic reports that directly inform agent improvement.
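The three-stage pipeline described above (segment, diagnose, aggregate) can be sketched in a few dozen lines. This is a minimal illustration, not the paper's implementation: the function names (`segment`, `diagnose`, `evaluate`), the `SubtaskDiagnosis` structure, and the trivial rule-based judge (which just flags error observations) are all hypothetical stand-ins for GUIDE's model-based segmenter and subtask evaluator.

```python
from dataclasses import dataclass

@dataclass
class SubtaskDiagnosis:
    """Structured per-subtask verdict, mirroring GUIDE's diagnosis output."""
    description: str
    completed: bool
    error_analysis: str = ""
    recommendation: str = ""

def segment(trajectory, boundaries):
    """Stage 1: partition an action-observation trace into subtask units.
    In GUIDE the boundaries would come from a model-based segmenter;
    here they are supplied explicitly as step indices."""
    segments, start = [], 0
    for end in boundaries:
        segments.append(trajectory[start:end])
        start = end
    segments.append(trajectory[start:])
    return [s for s in segments if s]

def diagnose(subtask):
    """Stage 2: judge one bounded subtask in isolation.
    Toy stand-in for the model-based judge: a subtask fails if any of
    its steps reports an error in the observation."""
    failed = [step for step in subtask if "error" in step["observation"].lower()]
    if failed:
        return SubtaskDiagnosis(
            description=subtask[0]["action"],
            completed=False,
            error_analysis=f"{len(failed)} step(s) produced an error observation",
            recommendation="retry the failing step with a corrected target element",
        )
    return SubtaskDiagnosis(description=subtask[0]["action"], completed=True)

def evaluate(trajectory, boundaries):
    """Stage 3: aggregate per-subtask diagnoses into a task-level verdict."""
    diagnoses = [diagnose(seg) for seg in segment(trajectory, boundaries)]
    verdict = all(d.completed for d in diagnoses)
    return verdict, diagnoses
```

Because each call to `diagnose` sees only its bounded segment, the judge's context stays small no matter how long the full trajectory grows, which is the mechanism GUIDE credits for avoiding context overload; the diagnoses list, not just the final boolean, is the interpretable artifact.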