BenchGuard: Who Guards the Benchmarks? Automated Auditing of LLM Agent Benchmarks

arXiv cs.CL / April 29, 2026


Key Points

  • The paper argues that many apparent agent failures on LLM agent benchmarks stem from flaws in the benchmarks themselves (broken specifications, hidden assumptions, or overly rigid evaluation scripts) rather than from the agents.
  • It introduces BenchGuard, an automated auditing framework that uses frontier LLMs to cross-verify benchmark artifacts via structured LLM protocols, optionally drawing on agent solutions or execution traces as additional diagnostic evidence (a minimal sketch of one such cross-verification pass follows this list).
  • BenchGuard found 12 author-confirmed issues in ScienceAgentBench, including fatal specification errors that made some tasks unsolvable, demonstrating the method’s ability to catch severe benchmark defects.
  • On the BIXBench Verified-50 subset, BenchGuard exactly matched 83.3% of expert-identified issues and caught defects that earlier human review had missed entirely, suggesting it can effectively complement human auditing.
  • The authors report that a full audit of 50 complex bioinformatics tasks costs under USD 15, i.e., roughly USD 0.30 per task, positioning automated benchmark auditing as a practical complement to manual review and a step toward AI-assisted benchmark development.
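
To make the cross-verification step concrete, here is a minimal sketch of one audit pass in Python. Everything in it is an illustrative assumption rather than the paper's actual protocol: the `BenchmarkTask` fields, the `call_llm` stub, the prompt wording, and the issue schema are all hypothetical.

```python
import json
from dataclasses import dataclass


@dataclass
class BenchmarkTask:
    """One benchmark task bundled with the artifacts an auditor needs.
    (Hypothetical structure; field names are assumptions.)"""
    task_id: str
    specification: str   # natural-language task description
    gold_program: str    # reference solution shipped with the benchmark
    eval_script: str     # grading code that scores agent outputs


AUDIT_PROMPT = """You are auditing a benchmark task for internal consistency.

## Task specification
{spec}

## Reference solution
{gold}

## Evaluation script
{evaluator}

Cross-check the three artifacts and report every defect you find:
- specification errors that make the task unsolvable as stated
- implicit assumptions the spec never states but the evaluator requires
- rigid checks that would reject valid alternative solutions

Answer with a JSON list of objects with keys "severity"
("fatal" | "major" | "minor"), "artifact", and "description".
Return [] if the task is internally consistent."""


def call_llm(prompt: str) -> str:
    """Stand-in for a frontier-LLM call; wire up the provider SDK of
    your choice here and return the model's raw text response."""
    raise NotImplementedError


def audit_task(task: BenchmarkTask) -> list[dict]:
    """Run one cross-verification pass and parse the structured verdict."""
    response = call_llm(AUDIT_PROMPT.format(
        spec=task.specification,
        gold=task.gold_program,
        evaluator=task.eval_script,
    ))
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        # Off-schema output is surfaced as an audit finding itself,
        # rather than being silently dropped.
        return [{"severity": "major", "artifact": "auditor",
                 "description": "unparseable audit output: " + response[:200]}]
```

Under this framing, auditing a whole benchmark reduces to looping audit_task over its tasks and triaging the flagged issues by severity.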

Abstract

As benchmarks grow in complexity, many apparent agent failures are not failures of the agent at all; they are failures of the benchmark itself: broken specifications, implicit assumptions, and rigid evaluation scripts that penalize valid alternative approaches. We propose employing frontier LLMs as systematic auditors of evaluation infrastructure, and realize this vision through BenchGuard, the first automated auditing framework for task-oriented, execution-based agent benchmarks. BenchGuard cross-verifies all benchmark artifacts via structured LLM protocols, optionally incorporating agent solutions or execution traces as additional diagnostic evidence. Deployed on two prominent scientific benchmarks, BenchGuard identified 12 author-confirmed issues in ScienceAgentBench, including fatal errors rendering tasks unsolvable, and exactly matched 83.3% of expert-identified issues on the BIXBench Verified-50 subset, catching defects that prior human review missed entirely. A full audit of 50 complex bioinformatics tasks costs under USD 15, making automated benchmark auditing a practical and valuable complement to human review. These findings point toward AI-assisted benchmark development, where frontier models serve not only as subjects of evaluation but as active participants in validating the evaluation infrastructure itself.
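
The abstract's optional use of agent solutions and execution traces as diagnostic evidence could look something like the hypothetical triage step below, which reuses `BenchmarkTask` and `call_llm` from the earlier sketch. The prompt and verdict labels are assumptions, not BenchGuard's actual protocol.

```python
# Hypothetical trace-based triage, building on the sketch above
# (reuses BenchmarkTask and call_llm). Assumed, not from the paper.

TRIAGE_PROMPT = """An agent failed the task below. Using the task
specification, the evaluation script, and the agent's execution trace,
decide who is at fault.

## Task specification
{spec}

## Evaluation script
{evaluator}

## Agent execution trace (tool calls, stdout/stderr)
{trace}

Answer with exactly one word:
AGENT_ERROR     - the agent made a genuine mistake
BENCHMARK_ERROR - the specification or evaluator is defective
AMBIGUOUS       - the evidence does not settle the question"""


def triage_failure(task: BenchmarkTask, trace: str) -> str:
    """Classify one observed failure as agent-side or benchmark-side."""
    verdict = call_llm(TRIAGE_PROMPT.format(
        spec=task.specification,
        evaluator=task.eval_script,
        trace=trace,
    )).strip().upper()
    # Fall back to AMBIGUOUS on any off-schema answer.
    return verdict if verdict in {"AGENT_ERROR", "BENCHMARK_ERROR",
                                  "AMBIGUOUS"} else "AMBIGUOUS"
```

A triage step like this would explain how mining agent execution traces can separate benchmark defects from genuine agent mistakes at audit time.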