BenchBench: Benchmarking Automated Benchmark Generation

arXiv cs.CL / 3/24/2026


Key Points

  • The paper argues that LLM evaluation should measure not only answer quality but also how well models can design benchmarks, since static test sets saturate quickly, are vulnerable to contamination, and are costly to refresh.
  • It introduces BenchBench, a three-stage pipeline that extracts domain cards, uses multiple “designer” LLMs to generate quota-controlled benchmark suites, and validates items via a multi-model answerer panel with verifiers or rubric-based judging.
  • BenchBench generates 16.7K benchmark items across nine variants (computer science, mathematics, medicine, and theory-of-mind), retaining ~15K core items and producing ~152K graded model-to-item responses with item-level quality flags and psychometric diagnostics.
  • Results show benchmark-design ability has only a moderate correlation with answer-time strength (Spearman rho ~0.37), and invalidity is negatively associated with discrimination, enabling scalable audits of fidelity across format, modality, and language.
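The three stages above (domain-card extraction, quota-controlled generation, panel-based validation) can be sketched roughly as follows. This is a minimal illustration of the described workflow, not the paper's actual code; all function names and data shapes are assumptions.

```python
# Hypothetical sketch of the BenchBench three-stage flow.
# Designers and answerers are stand-ins for LLM calls.

def extract_domain_card(seed_benchmark):
    # Stage 1: distill a seed benchmark into a structured "domain card"
    return {
        "domain": seed_benchmark["domain"],
        "formats": sorted({item["format"] for item in seed_benchmark["items"]}),
    }

def generate_suite(card, designer, quota=3):
    # Stage 2: a designer LLM produces a quota-controlled suite,
    # with a fixed number of items per format
    return [designer(card, fmt) for fmt in card["formats"] for _ in range(quota)]

def validate(items, answerers, verify):
    # Stage 3: a multi-model answerer panel attempts each item;
    # verify() stands in for exact/numeric/symbolic checking or
    # rubric-guided judging, yielding a designer-answerer matrix
    graded = {}
    for i, item in enumerate(items):
        graded[i] = [verify(item, model(item)) for model in answerers]
    return graded
```

Item-level quality flags (e.g. "no answerer solves it" or "every answerer solves it") would then be derived from each row of the graded matrix.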

Abstract

Benchmarks are the de facto standard for tracking progress in large language models (LLMs), yet static test sets can rapidly saturate, become vulnerable to contamination, and are costly to refresh. Scalable evaluation of open-ended items often relies on LLM judges, introducing additional sources of bias and prompt sensitivity. We argue that evaluation must extend beyond how well models answer benchmarks to how well models design them. We introduce BenchBench, a three-stage pipeline and dataset for benchmarking automated benchmark generation: (i) extract structured domain cards from seed benchmarks, (ii) prompt multiple designer LLMs to generate quota-controlled suites, and (iii) validate items with a multi-model answerer panel using exact/numeric/symbolic verifiers when possible and rubric-guided judging otherwise, yielding designer-answerer matrices with item-level quality flags and psychometric diagnostics. Across nine variants spanning computer science, mathematics, medicine, and theory-of-mind reasoning (including multilingual and multimodal settings), we generate 16.7K items, retain ~15K core items post-filtering, and produce ~152K graded model-item responses. BenchBench shows that benchmark-design ability is only moderately correlated with answer-time strength (Spearman rho ~0.37), invalidity is negatively associated with discrimination (Pearson r ~0.62), and the resulting designer-answerer matrices enable scalable audits of format/modality/language fidelity and suite-dependent self/family interactions. The project is available at: https://github.com/koanatakiyo/BenchBench.
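The headline "moderate correlation" claim compares two model rankings, one by design quality and one by answer quality, via Spearman rank correlation. A stdlib-only sketch of that statistic (the scores below are made-up illustrations, not the paper's data):

```python
# Spearman rank correlation: Pearson correlation computed on ranks,
# with ties assigned their average rank.

def ranks(xs):
    # average 1-based ranks, handling ties
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the two rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

A value near 1 would mean strong answerers are also strong designers; the paper's ~0.37 indicates the two abilities only partially overlap.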