Emergence WebVoyager: Toward Consistent and Transparent Evaluation of (Web) Agents in The Wild

arXiv cs.AI / 4/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that evaluating AI web agents in real-world conditions is often unreliable due to issues like task-framing ambiguity and operational variability that undermine reproducibility and fair comparisons.
  • It audits the existing WebVoyager benchmark and identifies shortcomings that make it difficult to obtain consistent, context-aligned performance measurements.
  • To address this, the authors introduce “Emergence WebVoyager,” which standardizes how tasks are instantiated, how failures are handled, and how results are annotated and reported.
  • The standardized protocol yields 95.9% inter-annotator agreement, indicating clearer task formulation and more reliable scoring and documentation (see the sketch after this list).
  • Using the framework to evaluate OpenAI Operator, the study finds an overall success rate of 68.6%, with substantial variation across domains and task types and well below the 87% previously reported by OpenAI, highlighting how evaluation methodology shapes measured performance.
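
The summary does not specify how these two figures are computed, but both reduce to simple aggregation over per-task judgments. Below is a minimal Python sketch, assuming raw percent agreement between annotators and majority-vote success scoring; the TaskResult fields and the example task IDs are made up for illustration and are not taken from the paper.

```python
# Illustrative sketch (not the authors' code): raw inter-annotator agreement
# and an overall success rate computed from per-task judgments.
from dataclasses import dataclass


@dataclass
class TaskResult:
    task_id: str
    domain: str                  # e.g. the website or task category
    annotator_labels: tuple      # per-annotator success judgments (True/False)


def inter_annotator_agreement(results: list[TaskResult]) -> float:
    """Fraction of tasks on which all annotators assign the same label."""
    agreed = sum(1 for r in results if len(set(r.annotator_labels)) == 1)
    return agreed / len(results)


def success_rate(results: list[TaskResult]) -> float:
    """Overall success rate, counting a task as solved only if a strict
    majority of annotators mark it successful."""
    solved = sum(
        1 for r in results
        if sum(r.annotator_labels) * 2 > len(r.annotator_labels)
    )
    return solved / len(results)


# Toy usage with made-up data:
results = [
    TaskResult("allrecipes-001", "Allrecipes", (True, True)),
    TaskResult("gmap-014", "Google Maps", (False, True)),
    TaskResult("arxiv-007", "ArXiv", (False, False)),
]
print(f"agreement:    {inter_annotator_agreement(results):.1%}")
print(f"success rate: {success_rate(results):.1%}")
```

In practice, the choice of agreement statistic (raw agreement versus a chance-corrected measure such as Cohen's kappa) and the decision of how environment failures are counted can shift these numbers noticeably, which is exactly the kind of methodological variation the benchmark aims to pin down.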

Abstract

Reliable evaluation of AI agents operating in complex, real-world environments requires methodologies that are robust, transparent, and contextually aligned with the tasks agents are intended to perform. This study identifies persistent shortcomings in existing AI agent evaluation practices that are particularly acute in web agent evaluation, as exemplified by our audit of WebVoyager, including task-framing ambiguity and operational variability that hinder meaningful and reproducible performance comparisons. To address these challenges, we introduce Emergence WebVoyager, an enhanced version of the WebVoyager benchmark that standardizes evaluation methodology through clear guidelines for task instantiation, failure handling, annotation, and reporting. Emergence WebVoyager achieves an inter-annotator agreement of 95.9%, indicating improved clarity and reliability in both task formulation and evaluation. Applying this framework to evaluate OpenAI Operator reveals substantial performance variation across domains and task types, with an overall success rate of 68.6%, substantially lower than the 87% previously reported by OpenAI, demonstrating the utility of our approach for more rigorous and comparable web agent evaluation.
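
To make "clear guidelines for task instantiation, failure handling, annotation, and reporting" concrete, the sketch below shows one way such a standardized evaluation record could be structured. The field names, failure categories, and the example task are hypothetical illustrations, not the paper's actual schema.

```python
# Hypothetical sketch of a standardized task/annotation record for a protocol
# like Emergence WebVoyager's. Field names and failure categories are
# illustrative assumptions, not the paper's actual format.
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    SUCCESS = "success"
    AGENT_FAILURE = "agent_failure"                # wrong actions or wrong final answer
    ENVIRONMENT_FAILURE = "environment_failure"    # site outage, CAPTCHA, rate limiting
    AMBIGUOUS_TASK = "ambiguous_task"              # task framing under-specified


@dataclass
class EvaluatedTask:
    task_id: str
    website: str
    instruction: str            # the exact prompt given to the agent
    instantiation_date: str     # when live web content was resolved, for reproducibility
    outcome: Outcome
    annotator_notes: str = ""
    evidence: list[str] = field(default_factory=list)  # screenshot / transcript references


record = EvaluatedTask(
    task_id="booking-023",
    website="Booking.com",
    instruction="Find a hotel in Lisbon for two adults, May 3-5, under $150/night.",
    instantiation_date="2025-06-01",
    outcome=Outcome.ENVIRONMENT_FAILURE,
    annotator_notes="Site blocked the agent with a CAPTCHA; categorized per the failure-handling guideline.",
    evidence=["run_023/step_12.png"],
)
```

Recording the instantiation date and an explicit failure category is what lets, for example, a CAPTCHA block be reported separately from a genuine agent error rather than silently lowering or inflating the headline success rate.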