AI Navigate

When LLM Judge Scores Look Good but Best-of-N Decisions Fail

arXiv cs.AI · March 16, 2026


Key Points

  • Large language models are often used as judges to score candidate responses, but validating a judge with a single global metric can be misleading when the deployment task is best-of-n selection within a prompt.
  • In a 5,000-prompt best-of-4 benchmark, a judge with moderate global correlation (r = 0.47) captures only about 21% of the improvement that perfect selection would achieve over random choice.
  • The shortfall arises because global agreement is driven largely by prompt-level baseline effects, while selection depends on within-prompt ranking: within-prompt correlation is only r_within ≈ 0.27, and coarse pointwise scoring produces ties in roughly 67% of pairwise comparisons.
  • In a matched-pair best-of-2 audit, explicit pairwise judging recovers much of the lost signal, raising recovery from about 21% to 61%. Audits of judge-based selection should therefore report within-prompt signal, tie rates, and top-1 recovery rather than global agreement alone.
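The recovery metric referenced above can be made concrete with a short sketch: recovery is the fraction of the oracle-over-random quality improvement that judge-based selection actually captures. The data layout and function name here are illustrative assumptions, not the paper's code.

```python
def recovery(prompts):
    """Fraction of the oracle-over-random improvement captured by judge selection.

    `prompts` is a list of candidate lists; each candidate is a
    (judge_score, true_quality) pair. Hypothetical layout for illustration.
    """
    judge_total = oracle_total = random_total = 0.0
    for cands in prompts:
        judge_total += max(cands, key=lambda c: c[0])[1]       # pick by judge score
        oracle_total += max(c[1] for c in cands)               # perfect selection
        random_total += sum(c[1] for c in cands) / len(cands)  # expected random pick
    return (judge_total - random_total) / (oracle_total - random_total)
```

A judge that always selects the truly best candidate yields recovery 1.0; one that always selects the worst yields a negative value, since it does worse than random choice.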

Abstract

Large language models are often used as judges to score candidate responses, then validated with a single global metric such as correlation with reference labels. This can be misleading when the real deployment task is best-of-n selection within a prompt. In a 5,000-prompt best-of-4 benchmark from Chatbot Arena, a judge with moderate global correlation (r = 0.47) captures only 21.0% of the improvement that perfect selection would achieve over random choice. The gap arises because global agreement is driven largely by prompt-level baseline effects, while selection depends on within-prompt ranking: within-prompt correlation is only r_within = 0.27, and coarse pointwise scoring creates ties in 67% of pairwise comparisons. In a matched-pair best-of-2 audit, explicit pairwise judging recovers much of this lost signal, raising recovery from 21.1% to 61.2%. For judge-based selection, the relevant audit should report within-prompt signal, tie rates, and recovery/top-1 accuracy, not global agreement alone.
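The two audit quantities the abstract emphasizes, within-prompt correlation and pairwise tie rate, can be sketched as follows: center judge and reference scores within each prompt (removing the prompt-level baseline the abstract says drives global agreement), then correlate the pooled residuals and count tied judge scores across within-prompt pairs. The data layout is an illustrative assumption, not the paper's code.

```python
from itertools import combinations
from statistics import mean

def within_prompt_stats(prompts):
    """Return (within-prompt Pearson r, pairwise tie rate).

    `prompts`: list of candidate lists, each candidate a
    (judge_score, ref_score) pair. Hypothetical layout for illustration.
    """
    xs, ys = [], []
    ties = pairs = 0
    for cands in prompts:
        jm = mean(c[0] for c in cands)
        rm = mean(c[1] for c in cands)
        for j, r in cands:
            xs.append(j - jm)  # judge score with prompt baseline removed
            ys.append(r - rm)  # reference score with prompt baseline removed
        for a, b in combinations(cands, 2):
            pairs += 1
            ties += a[0] == b[0]  # coarse pointwise scores collide
    sxx = sum(x * x for x in xs)
    syy = sum(y * y for y in ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    r_within = sxy / (sxx * syy) ** 0.5 if sxx and syy else 0.0
    return r_within, ties / pairs
```

In the paper's setting, a judge can score well globally yet show low r_within and a high tie rate, which is exactly the failure mode that global correlation hides.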