Step-level Optimization for Efficient Computer-use Agents

arXiv cs.AI / 5/1/2026


Key Points

  • The paper argues that repeatedly invoking large multimodal models at every step is inefficient for long-horizon GUI (computer-use) tasks, where difficulty varies widely across steps.
  • It identifies two recurring failure modes in benchmarks—progress stalls (looping or ineffective actions) and silent semantic drift (locally plausible actions that deviate from the user’s true goal).
  • To improve efficiency and speed, the authors propose an event-driven, step-level cascade that runs a small policy by default and escalates to a stronger model only when risk monitors trigger.
  • The framework uses two modular monitors: a Stuck Monitor that detects degraded progress from the agent's recent reasoning-action history, and a Milestone Monitor that verifies semantically meaningful checkpoints to catch silent drift.
  • The approach is designed to be deployment-friendly, able to layer on top of existing computer-use agents without changing their architecture or retraining the large model.

Abstract

Computer-use agents provide a promising path toward general software automation because they can interact directly with arbitrary graphical user interfaces instead of relying on brittle, application-specific integrations. Despite recent advances in benchmark performance, strong computer-use agents remain expensive and slow in practice, since most systems invoke large multimodal models at nearly every interaction step. We argue that this uniform allocation of compute is fundamentally inefficient for long-horizon GUI tasks. Such trajectories are highly heterogeneous: many steps are routine and can be handled reliably by smaller, cheaper policies, while errors tend to concentrate at a relatively small number of high-risk moments. Across computer-use benchmarks, these failures repeatedly take two forms: progress stalls, where the agent loops, repeats ineffective actions, or fails to make meaningful progress, and silent semantic drift, where the agent continues taking locally plausible actions after already deviating from the user's true goal. To address this inefficiency, we propose an event-driven, step-level cascade for computer-use agents that runs a small policy by default and escalates to a stronger model only when lightweight learned monitors detect elevated risk. Our framework combines two complementary signals: a Stuck Monitor that detects degraded progress from recent reasoning-action history and triggers recovery, and a Milestone Monitor that identifies semantically meaningful checkpoints where sparse verification is most informative for catching drift. This design turns always-on frontier-model inference into adaptive, on-demand compute allocation over the course of an evolving interaction. The framework is modular and deployment-oriented: it can be layered on top of existing computer-use agents without changing the underlying agent architecture or retraining the large model.
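To make the control flow concrete, here is a minimal sketch of the event-driven, step-level cascade the abstract describes. All names (`StuckMonitor`, `MilestoneMonitor`, `run_cascade`, the policy/environment interfaces) and the simple repeated-action heuristic are illustrative assumptions, not the authors' actual implementation; the paper's monitors are learned, whereas this sketch uses hand-coded triggers purely to show where escalation happens.

```python
# Hypothetical sketch of an event-driven, step-level cascade for a
# computer-use agent: run a small policy by default, escalate to a
# stronger model only when a risk monitor fires.
from dataclasses import dataclass, field

@dataclass
class StuckMonitor:
    """Flags degraded progress; here, a toy heuristic that triggers
    when the last `window` actions are all identical (a loop)."""
    window: int = 4
    history: list = field(default_factory=list)

    def update(self, action) -> bool:
        self.history.append(action)
        recent = self.history[-self.window:]
        return len(recent) == self.window and len(set(recent)) == 1

@dataclass
class MilestoneMonitor:
    """Requests sparse verification at semantically meaningful
    checkpoints; here, a toy membership check against known milestones."""
    milestones: set = field(default_factory=set)

    def update(self, observation) -> bool:
        return observation in self.milestones

def run_cascade(task, small_policy, large_policy, env, max_steps=50):
    """Default to the cheap policy; escalate on elevated risk."""
    stuck = StuckMonitor()
    milestone = MilestoneMonitor(milestones=task.checkpoints)
    obs = env.reset(task)
    for _ in range(max_steps):
        action = small_policy.act(obs)            # cheap default step
        if stuck.update(action) or milestone.update(obs):
            action = large_policy.act(obs)        # on-demand escalation
        obs, done = env.step(action)
        if done:
            return True
    return False
```

The key property is that the frontier model is never in the inner loop: it is consulted only at steps the monitors flag, turning always-on inference into adaptive compute allocation.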