AI Navigate

Mediocrity Is the Key for LLM-as-a-Judge Anchor Selection

arXiv cs.CL · March 18, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper systematically evaluates 22 anchors on the Arena-Hard-v2.0 dataset and finds that anchor choice critically affects how well model rankings agree with human judgments.
  • It shows that common choices, such as the best- or worst-performing model, make poor anchors: because they are extreme, their win rates reveal little about the relative ordering of the other models (see the simulation sketch after this list).
  • The study finds that the effect size of anchor selection is comparable to the effect of selecting the judge model, underscoring its importance in benchmark design.
  • A power analysis demonstrates that standard benchmark sizes are too small for reliable pairwise evaluation and cannot dependably separate competitive models.
  • The authors provide actionable recommendations, including guidelines for selecting informative anchors and ensuring benchmark sizes are sufficient for reliable and efficient evaluation.

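Comparing all n models pairwise takes n(n−1)/2 judged matchups, which is why benchmarks like Arena-Hard and AlpacaEval compare every model against one anchor instead. The sketch below is a minimal simulation of that setup (hypothetical skill values and a sigmoid "judge" of our own choosing, not the paper's pipeline): it illustrates how a mid-pack anchor preserves the model ordering while the extreme anchors the paper warns about saturate and blur it.

```python
# Minimal sketch: rank models by judged win rate against a single anchor,
# then measure rank agreement with a hypothetical "true" skill ordering.
# Skills, the sigmoid judge, and its steepness are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical latent skills for six models on a 0-1 scale.
skills = np.array([0.20, 0.35, 0.50, 0.55, 0.70, 0.90])

def win_rates(anchor_skill, n_prompts=100):
    """Simulate each model's win rate against the anchor over n_prompts.

    Win probability is a Bradley-Terry-style sigmoid of the skill gap;
    a real benchmark would use an LLM judge's verdicts instead.
    """
    p_win = 1.0 / (1.0 + np.exp(-20.0 * (skills - anchor_skill)))
    return rng.binomial(n_prompts, p_win) / n_prompts

def mean_rank_corr(anchor_skill, reps=200):
    """Average Spearman correlation between anchor-based and true rankings."""
    return np.mean([spearmanr(win_rates(anchor_skill), skills).correlation
                    for _ in range(reps)])

# Extreme anchors saturate: most models beat (or lose to) them almost
# every time, so the resulting win rates barely order the field.
for label, anchor in [("worst model as anchor", 0.20),
                      ("mid-pack anchor", 0.50),
                      ("best model as anchor", 0.90)]:
    print(f"{label}: mean Spearman r = {mean_rank_corr(anchor):.2f}")
```

In this toy setup, the mid-pack anchor spreads win rates across the field and the simulated ranking tracks the true skills closely; against the extreme anchors, win rates pile up near 0 or 1 and the rank correlation drops.
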
Abstract

The "LLM-as-a-judge" paradigm has become a standard method for evaluating open-ended generation. To address the quadratic scalability costs of pairwise comparisons, popular benchmarks like Arena-Hard and AlpacaEval compare all models against a single anchor. However, despite its widespread use, the impact of anchor selection on the reliability of the results remains largely unexplored. In this work, we systematically investigate the effect of anchor selection by evaluating 22 different anchors on the Arena-Hard-v2.0 dataset. We find that the choice of anchor is critical: a poor anchor can dramatically reduce correlation with human rankings. We identify that common anchor choices (best-performing and worst-performing models) make poor anchors. Because these extreme anchors are consistently better or worse than all other models, they are seldom indicative of the relative ranking of the models. We further quantify the effect size of anchor selection, showing it is comparable to the selection of a judge model. We conclude with actionable recommendations. First, we conduct a power analysis and compute sufficient benchmark sizes for anchor-based evaluation, finding that standard benchmark sizes are insufficient for pairwise evaluation and fail to reliably distinguish between competitive models. Second, we provide guidelines for selecting informative anchors to ensure reliable and efficient evaluation practices.
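
As a back-of-the-envelope companion to the power analysis, the sketch below applies the textbook two-proportion z-test sample-size formula (our illustrative choice; the paper's exact procedure may differ) to ask how many prompts are needed to separate two competitive models whose win rates against a shared anchor are 52% and 55%.

```python
# Rough benchmark-size estimate via the two-proportion z-test formula.
# The win rates below are illustrative, not results from the paper.
from scipy.stats import norm

def prompts_needed(p1, p2, alpha=0.05, power=0.80):
    """Prompts per model to detect the win-rate gap p2 - p1 (two-sided)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for significance level
    z_b = norm.ppf(power)           # critical value for desired power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p1 - p2) ** 2) + 1

# Two competitive models: 52% vs 55% win rate against the same anchor.
print(prompts_needed(0.52, 0.55))  # roughly 4,300 prompts per model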