On the Reliability of Computer Use Agents

arXiv cs.AI / 4/21/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Computer-use agents can perform well on web navigation and desktop/software tasks, but they may still fail on repeated runs of the same task.
  • The paper investigates why this unreliability occurs by examining stochastic execution, ambiguity in task specifications, and variability in agent behavior.
  • Using OSWorld with repeated executions and paired statistical tests that detect task-level changes across settings, the authors find that reliability depends on both task specification quality and behavioral variation between runs.
  • The work recommends evaluating computer-use agents under repeated execution, enabling agents to resolve ambiguities through interaction, and using strategies that stay stable across runs.

Abstract

Computer-use agents have rapidly improved on real-world tasks such as web navigation, desktop automation, and software interaction, in some cases surpassing human performance. Yet even when the task and model are unchanged, an agent that succeeds once may fail on a repeated execution of the same task. This raises a fundamental question: if an agent can succeed at a task once, what prevents it from doing so reliably? In this work, we study the sources of unreliability in computer-use agents through three factors: stochasticity during execution, ambiguity in task specification, and variability in agent behavior. We analyze these factors on OSWorld using repeated executions of the same task together with paired statistical tests that capture task-level changes across settings. Our analysis shows that reliability depends on both how tasks are specified and how agent behavior varies across executions. These findings suggest the need to evaluate agents under repeated execution, to allow agents to resolve task ambiguity through interaction, and to favor strategies that remain stable across runs.
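The analysis pipeline the abstract describes, i.e. running each task repeatedly under two settings and applying a paired statistical test to per-task outcomes, can be sketched as below. The paper does not specify which paired test is used; this sketch uses a paired permutation test on per-task success rates implemented from scratch, and all function names and data are illustrative assumptions, not the authors' code.

```python
import random


def success_rate(runs):
    """Fraction of successful runs for one task (runs: list of booleans
    from repeated executions of the same task)."""
    return sum(runs) / len(runs)


def paired_permutation_test(rates_a, rates_b, n_perm=10000, seed=0):
    """Two-sided paired permutation test on per-task success rates.

    rates_a[i] and rates_b[i] are the same task's success rate under
    setting A and setting B. Under the null hypothesis the sign of each
    per-task difference is arbitrary, so we randomly flip signs and
    count how often the permuted mean difference is at least as extreme
    as the observed one.
    """
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(rates_a, rates_b)]
    observed = abs(sum(diffs)) / len(diffs)
    extreme = 0
    for _ in range(n_perm):
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if abs(flipped) / len(diffs) >= observed:
            extreme += 1
    return extreme / n_perm  # p-value estimate


# Hypothetical example: 5 tasks, 4 repeated runs each, two settings.
setting_a = [[True, True, False, True], [True, True, True, True],
             [False, True, True, True], [True, False, True, True],
             [True, True, True, False]]
setting_b = [[False, False, True, False], [True, False, False, True],
             [False, False, True, False], [False, True, False, False],
             [True, False, False, False]]

p = paired_permutation_test([success_rate(r) for r in setting_a],
                            [success_rate(r) for r in setting_b])
```

Pairing by task matters here: per-task difficulty varies widely, so comparing pooled success rates across settings would let between-task variance swamp the within-task effect the paper is after.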