AlphaEval: Evaluating Agents in Production

arXiv cs.CL / 4/15/2026

💬 Opinion · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper argues that current agent benchmarks fail to reflect production realities, such as implicit constraints, heterogeneous multi-modal inputs, long-horizon deliverables, and evolving expert judgments.
  • It introduces AlphaEval, a production-grounded benchmark with 94 tasks drawn from seven companies and spanning six O*NET domains, designed to evaluate full agent products (e.g., Claude Code, Codex) rather than model-only capabilities.
  • AlphaEval’s evaluation framework combines multiple paradigms—including LLM-as-a-Judge, reference-driven metrics, formal verification, rubric-based assessment, and automated UI testing—with each domain composing several paradigms (see the sketch after this list).
  • The work also proposes a requirement-to-benchmark construction framework that systematically converts authentic production requirements into executable evaluation tasks in minimal time, making the construction process reproducible and reusable.

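The composed-paradigm setup in the third key point can be pictured as a small scoring harness that aggregates heterogeneous evaluators within a domain. The sketch below is a minimal illustration under assumed interfaces, not AlphaEval's actual implementation; the names `Paradigm`, `RubricCheck`, `ReferenceMetric`, and `evaluate_domain`, as well as the weighted-average composition, are hypothetical.

```python
# Illustrative sketch only: not AlphaEval's actual code or interfaces.
from dataclasses import dataclass
from typing import Callable, Protocol


@dataclass
class EvalResult:
    """Score in [0, 1] plus a short rationale for auditability."""
    score: float
    rationale: str


class Paradigm(Protocol):
    """One evaluation paradigm (LLM judge, rubric, reference metric, ...)."""
    def evaluate(self, task: dict, output: str) -> EvalResult: ...


@dataclass
class RubricCheck:
    """Rubric-based assessment: each criterion is a predicate over the output."""
    criteria: dict[str, Callable[[str], bool]]

    def evaluate(self, task: dict, output: str) -> EvalResult:
        hits = {name: check(output) for name, check in self.criteria.items()}
        score = sum(hits.values()) / max(len(hits), 1)
        return EvalResult(score, f"passed {sum(hits.values())}/{len(hits)} criteria")


@dataclass
class ReferenceMetric:
    """Reference-driven metric: compare against a gold deliverable (here, token F1)."""
    reference: str

    def evaluate(self, task: dict, output: str) -> EvalResult:
        ref, out = set(self.reference.lower().split()), set(output.lower().split())
        overlap = len(ref & out)
        if not overlap or not out:
            return EvalResult(0.0, "no token overlap with reference")
        precision, recall = overlap / len(out), overlap / len(ref)
        f1 = 2 * precision * recall / (precision + recall)
        return EvalResult(f1, f"token F1 = {f1:.2f}")


def evaluate_domain(task: dict, output: str,
                    paradigms: list[tuple[str, Paradigm, float]]) -> float:
    """Compose several paradigms for one domain as a weighted score."""
    total = sum(weight for _, _, weight in paradigms)
    return sum(weight * p.evaluate(task, output).score
               for _, p, weight in paradigms) / total
```
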
Abstract

The rapid deployment of AI agents in commercial settings has outpaced the development of evaluation methodologies that reflect production realities. Existing benchmarks measure agent capabilities through retrospectively curated tasks with well-specified requirements and deterministic metrics -- conditions that diverge fundamentally from production environments where requirements contain implicit constraints, inputs are heterogeneous multi-modal documents with information fragmented across sources, tasks demand undeclared domain expertise, outputs are long-horizon professional deliverables, and success is judged by domain experts whose standards evolve over time. We present AlphaEval, a production-grounded benchmark of 94 tasks sourced from seven companies deploying AI agents in their core business, spanning six O*NET (Occupational Information Network) domains. Unlike model-centric benchmarks, AlphaEval evaluates complete agent products -- Claude Code, Codex, etc. -- as commercial systems, capturing performance variations invisible to model-level evaluation. Our evaluation framework covers multiple paradigms (LLM-as-a-Judge, reference-driven metrics, formal verification, rubric-based assessment, automated UI testing, etc.), with individual domains composing multiple paradigms. Beyond the benchmark itself, we contribute a requirement-to-benchmark construction framework -- a systematic methodology that transforms authentic production requirements into executable evaluation tasks in minimal time. This framework standardizes the entire pipeline from requirement to evaluation, providing a reproducible, modular process that any organization can adopt to construct production-grounded benchmarks for their own domains.
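
To make the requirement-to-benchmark idea concrete, the sketch below shows one plausible shape for the conversion step: a raw production requirement, with its implicit constraints surfaced by experts, becomes an executable task record with an explicit rubric and attached evaluation paradigms. The data classes and the `requirement_to_task` function are hypothetical stand-ins, not the paper's pipeline.

```python
# Illustrative sketch only: a guessed schema, not the paper's construction framework.
from dataclasses import dataclass, field


@dataclass
class ProductionRequirement:
    """A requirement as a company states it, often with implicit constraints."""
    domain: str                      # e.g. an O*NET occupational domain
    statement: str                   # the deliverable as originally requested
    source_documents: list[str]      # heterogeneous multi-modal inputs
    implicit_constraints: list[str]  # constraints surfaced by domain experts


@dataclass
class BenchmarkTask:
    """An executable evaluation task derived from a production requirement."""
    task_id: str
    prompt: str
    inputs: list[str]
    rubric: list[str]
    paradigms: list[str] = field(default_factory=list)


def requirement_to_task(req: ProductionRequirement, task_id: str) -> BenchmarkTask:
    """Convert a requirement into a task: make implicit constraints explicit
    in the rubric and attach the paradigms appropriate for the domain."""
    rubric = [f"deliverable satisfies: {c}" for c in req.implicit_constraints]
    rubric.append("deliverable addresses the stated requirement end to end")
    return BenchmarkTask(
        task_id=task_id,
        prompt=req.statement,
        inputs=req.source_documents,
        rubric=rubric,
        paradigms=["rubric", "llm_judge"],
    )
```

The design choice worth noting is that implicit constraints are promoted to explicit rubric items, which is one way a pipeline like this could keep expert judgment reproducible as standards evolve.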