BenchBrowser -- Collecting Evidence for Evaluating Benchmark Validity

arXiv cs.AI / 3/20/2026

Key Points

  • The paper argues that high-level benchmark metadata is too coarse to verify whether benchmarks test the capabilities practitioners actually care about.
  • It introduces BenchBrowser, a retriever that surfaces evaluation items relevant to natural language use cases across more than 20 benchmark suites.
  • A human study validates BenchBrowser's retrieval precision, supporting its use for diagnosing content validity gaps and low convergent validity.
  • BenchBrowser provides a way to quantify and diagnose the gap between practitioner intent and what benchmarks actually test.

Abstract

Do language model benchmarks actually measure what practitioners intend them to? High-level metadata is too coarse to convey the granular reality of benchmarks: a "poetry" benchmark may never test for haikus, while "instruction-following" benchmarks will often test for an arbitrary mix of skills. This opacity makes verifying alignment with practitioner goals a laborious process, risking an illusion of competence even when models fail on untested facets of user interests. We introduce BenchBrowser, a retriever that surfaces evaluation items relevant to natural language use cases over 20 benchmark suites. Validated by a human study confirming high retrieval precision, BenchBrowser generates evidence to help practitioners diagnose low content validity (narrow coverage of a capability's facets) and low convergent validity (lack of stable rankings when measuring the same capability). BenchBrowser thus helps quantify a critical gap between practitioner intent and what benchmarks actually test.
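
The abstract describes BenchBrowser as a retriever over evaluation items pooled from many benchmark suites. As a rough illustration of that workflow only (not the paper's actual method), the sketch below ranks a handful of invented benchmark items against a practitioner's stated use case using TF-IDF cosine similarity; the suite names, items, and query are all hypothetical.

```python
# Minimal sketch, assuming a simple lexical retriever over pooled benchmark items.
# This is NOT BenchBrowser's implementation; all suite names and items are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical evaluation items drawn from several benchmark suites.
benchmark_items = [
    ("poetry-bench", "Write a sonnet about the changing seasons."),
    ("poetry-bench", "Compose a limerick about a cat who loves math."),
    ("instruct-bench", "Summarize the following article in three bullet points."),
    ("instruct-bench", "Translate this paragraph into formal French."),
    ("code-bench", "Implement a function that reverses a linked list."),
]

def retrieve(use_case: str, items, top_k: int = 3):
    """Rank evaluation items by lexical similarity to the stated use case."""
    texts = [text for _, text in items]
    vectorizer = TfidfVectorizer().fit(texts + [use_case])
    item_vecs = vectorizer.transform(texts)
    query_vec = vectorizer.transform([use_case])
    scores = cosine_similarity(query_vec, item_vecs)[0]
    ranked = sorted(zip(scores, items), key=lambda pair: pair[0], reverse=True)
    return ranked[:top_k]

# A practitioner's use case: do any pooled items actually test haiku writing?
for score, (suite, text) in retrieve("write a haiku about autumn", benchmark_items):
    print(f"{score:.2f}  [{suite}]  {text}")
```

In this toy setup, uniformly low similarity scores for a use case would be the kind of evidence the paper associates with a content validity gap: the pooled benchmarks never probe that facet, even though their high-level labels (e.g., "poetry") suggest they might.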