Growing Pains: Extensible and Efficient LLM Benchmarking Via Fixed Parameter Calibration

arXiv cs.CL / 4/15/2026


Key Points

  • The paper introduces an extensible LLM benchmarking framework that uses multidimensional Item Response Theory (IRT) with anchor items to calibrate newly added benchmarks against a fixed evaluation suite.
  • It addresses the comparability problem caused by evaluating different models on different datasets or samples by holding previously calibrated item parameters fixed and using a fixed anchor set per dataset.
  • The method supports realistic “datasets arrive over time” evaluation, enabling direct comparisons across evaluation periods even when models are tested only on datasets available at the time.
  • Experiments across 400+ LLMs show the framework can predict full benchmark performance within 2–3 percentage points using about 100 anchor questions per dataset, while preserving rankings (Spearman ρ ≥ 0.9).
  • The authors provide code to implement the approach, positioning it as a way to extend benchmark suites with constant evaluation cost per newly added dataset.
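To make the fixed-parameter idea concrete, here is a minimal sketch of calibrating a newly added dataset's items against already-estimated model abilities. It assumes a unidimensional 2PL IRT model for simplicity (the paper uses multidimensional IRT), and every function name below is illustrative, not taken from the authors' repository: model abilities `thetas` are held fixed, and only the new items' discrimination `a` and difficulty `b` are fit.

```python
import numpy as np

def p_correct(theta, a, b):
    # 2PL IRT: probability that a model with ability `theta` answers an
    # item with discrimination `a` and difficulty `b` correctly.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def calibrate_new_items(responses, thetas, lr=0.1, steps=500):
    """Fit (a, b) for newly added items by gradient ascent on the
    Bernoulli log-likelihood, holding model abilities `thetas` FIXED.

    responses: (n_models, n_items) 0/1 correctness matrix
    thetas:    (n_models,) previously calibrated ability estimates
    """
    n_models, n_items = responses.shape
    a = np.ones(n_items)   # start at neutral discrimination
    b = np.zeros(n_items)  # start at average difficulty
    for _ in range(steps):
        p = p_correct(thetas[:, None], a[None, :], b[None, :])
        err = responses - p  # d(log-lik)/d(logit), shape (n_models, n_items)
        # Chain rule through logit z = a * (theta - b):
        grad_a = ((thetas[:, None] - b[None, :]) * err).sum(axis=0)
        grad_b = (-a[None, :] * err).sum(axis=0)
        a += lr * grad_a / n_models
        b += lr * grad_b / n_models
    return a, b
```

Because the abilities (and, in the paper's setup, previously calibrated item parameters) never move, scores from earlier evaluation periods remain on the same scale after each new dataset is calibrated in.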

Abstract

The rapid release of both language models and benchmarks makes it increasingly costly to evaluate every model on every dataset. In practice, models are often evaluated on different samples, making scores difficult to compare across studies. To address this, we propose a framework based on multidimensional Item Response Theory (IRT) that uses anchor items to calibrate new benchmarks to the evaluation suite while holding previously calibrated item parameters fixed. Our approach supports a realistic evaluation setting in which datasets are introduced over time and models are evaluated only on the datasets available at the time of evaluation, while a fixed anchor set for each dataset is used so that results from different evaluation periods can be compared directly. In large-scale experiments on more than 400 models, our framework predicts full-evaluation performance within 2–3 percentage points using only 100 anchor questions per dataset, with Spearman ρ ≥ 0.9 for ranking preservation, showing that it is possible to extend benchmark suites over time while preserving score comparability, at a constant evaluation cost per new dataset. Code available at https://github.com/eliyahabba/growing-pains.
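The prediction step described in the abstract can be sketched as follows: given item parameters for a dataset, estimate a new model's ability from its responses on the ~100 anchor questions only, then average the predicted correctness probabilities over the full item bank. This again assumes a unidimensional 2PL model and a simple grid-search MLE; the function names and the grid-based estimator are illustrative choices, not the authors' implementation.

```python
import numpy as np

def _p(theta, a, b):
    # 2PL response probability.
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_theta(anchor_resp, anchor_a, anchor_b):
    """Grid-search MLE of a model's ability from its 0/1 responses on the
    anchor items, with anchor item parameters held fixed."""
    grid = np.linspace(-4.0, 4.0, 401)
    p = _p(grid[:, None], anchor_a[None, :], anchor_b[None, :])
    ll = (anchor_resp * np.log(p) + (1 - anchor_resp) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(ll)]

def predict_full_score(anchor_resp, anchor_a, anchor_b, all_a, all_b):
    """Predict full-benchmark accuracy: expected fraction correct over the
    whole item bank at the ability inferred from the anchors alone."""
    theta = estimate_theta(anchor_resp, anchor_a, anchor_b)
    return _p(theta, all_a, all_b).mean()
```

Evaluation cost per new dataset stays constant under this scheme: each model answers only the fixed anchor set, and the remaining items contribute through their calibrated parameters rather than fresh inference runs.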