GUIDE: A Benchmark for Understanding and Assisting Users in Open-Ended GUI Tasks

arXiv cs.AI / 3/30/2026


Key Points

  • The paper introduces GUIDE (GUI User Intent Detection Evaluation), a benchmark designed to measure how well AI models understand user behavior and intent in open-ended GUI tasks rather than only automating clicks and keystrokes.
  • GUIDE comprises 67.5 hours of screen recordings from 120 novice user demonstrations with think-aloud narration across 10 software applications, and evaluates models on three tasks: behavior state detection, intent prediction, and help prediction.
  • Experiments show that current state-of-the-art multimodal models perform poorly on behavior state detection and help prediction, with accuracies of only 44.6% and 55.0%, respectively, indicating significant gaps in intent-aware assistance.
  • Adding user context substantially improves results, raising help-prediction performance by up to 50.2 percentage points and suggesting that structured user understanding is crucial for effective GUI collaboration.
  • The dataset is publicly available at guide-bench.github.io, enabling further research and comparison on intent-aware GUI agent capabilities.

Abstract

Graphical User Interface (GUI) agents have the potential to assist users in interacting with complex software (e.g., PowerPoint, Photoshop). While prior research has primarily focused on automating user actions through clicks and keystrokes, this paradigm overlooks human intention: users value the ability to explore, iterate, and refine their ideas while maintaining agency. To move beyond automation and toward collaboration, GUI agents must understand what users are doing and why. We introduce GUIDE (GUI User Intent Detection Evaluation), a benchmark that evaluates AI models on their ability to perceive user behavior, infer intent, and provide assistance in open-ended GUI tasks. GUIDE consists of 67.5 hours of screen recordings from 120 novice user demonstrations with think-aloud narrations across 10 software applications. GUIDE defines three tasks, (i) Behavior State Detection, (ii) Intent Prediction, and (iii) Help Prediction, that test a model's ability to recognize behavior states, reason about goals, and decide when and how to help. Evaluations across eight state-of-the-art multimodal models reveal that all models struggle, achieving only 44.6% and 55.0% accuracy on behavior state detection and help prediction, respectively. However, providing user context significantly improves performance, raising help prediction by up to 50.2pp and highlighting the critical role of structured user understanding in effective assistance. Our dataset is available at https://guide-bench.github.io.