LLMs Judging LLMs: A Simplex Perspective

arXiv stat.ML / 4/7/2026


Key Points

  • The paper studies automatic evaluation of free-form LLM outputs using LLMs as “judges,” arguing that judge-only approaches typically model only aleatoric (sampling) uncertainty while ignoring epistemic (judge-quality) uncertainty.
  • It introduces a geometric interpretation of $M$-level scoring in which both judge outputs and candidate outputs are mapped to points on an $(M-1)$-dimensional probability simplex, so that geometric quantities such as triangle areas correspond to key ranking concepts.
  • The simplex framework provides theoretical conditions and proofs for when rankings are identifiable, including formal support for the idea that pairwise/two-level scoring ($M=2$) tends to work better with LLM judges than multi-level scoring ($M>2$).
  • The authors propose geometric Bayesian priors to explicitly represent epistemic uncertainty about judge quality and perform sensitivity analyses by varying those priors.
  • Experiments on LLM benchmarks show that judge-only rankings are often robust but fail on some datasets, and the Bayesian uncertainty-aware method achieves substantially higher coverage rates than existing procedures.
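To make the simplex picture concrete, here is a minimal illustrative sketch (not the paper's implementation): an $M$-level score distribution is normalized into a point on the $(M-1)$-simplex, and for $M=3$ the area of the triangle spanned by three such points can be computed directly. All function names and example counts below are hypothetical.

```python
import numpy as np

def to_simplex(counts):
    """Normalize raw score counts into a point on the probability simplex."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum()

def triangle_area(p, q, r):
    """Area of the triangle spanned by three simplex points (M=3),
    computed via the cross product of two edge vectors in R^3."""
    cross = np.cross(q - p, r - p)
    return 0.5 * np.linalg.norm(cross)

# Three hypothetical candidates scored on a 3-level scale (M=3):
a = to_simplex([8, 1, 1])   # mostly low scores
b = to_simplex([1, 8, 1])   # mostly middle scores
c = to_simplex([1, 1, 8])   # mostly high scores

# A strictly positive area means the three points are not collinear,
# i.e., the candidates occupy genuinely distinct positions on the simplex.
area = triangle_area(a, b, c)
```

This is only a geometric illustration of the representation the paper describes; the paper's actual identifiability conditions relate such quantities to when rankings can be recovered.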

Abstract

Given the challenge of automatically evaluating free-form outputs from large language models (LLMs), an increasingly common solution is to use LLMs themselves as the judging mechanism, without any gold-standard scores. Implicitly, this practice accounts for only sampling variability (aleatoric uncertainty) and ignores uncertainty about judge quality (epistemic uncertainty). While this is justified if judges are perfectly accurate, it is unclear when such an approach is theoretically valid and practically robust. We study these questions for the task of ranking LLM candidates from a novel geometric perspective: for M-level scoring systems, both LLM judges and candidates can be represented as points on an (M-1)-dimensional probability simplex, where geometric concepts (e.g., triangle areas) correspond to key ranking concepts. This perspective yields intuitive theoretical conditions and visual proofs for when rankings are identifiable; for instance, we provide a formal basis for the "folk wisdom" that LLM judges are more effective for two-level scoring (M=2) than multi-level scoring (M>2). Leveraging the simplex, we design geometric Bayesian priors that encode epistemic uncertainty about judge quality and vary the priors to conduct sensitivity analyses. Experiments on LLM benchmarks show that rankings based solely on LLM judges are robust in many but not all datasets, underscoring both their widespread success and the need for caution. Our Bayesian method achieves substantially higher coverage rates than existing procedures, highlighting the importance of modeling epistemic uncertainty.
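The abstract's idea of priors over judge quality and prior-sensitivity analysis can be sketched with a standard conjugate setup. This is a hedged illustration under assumed machinery (a Dirichlet prior over a judge's score distribution on the simplex), not the paper's specific geometric priors; the prior strengths and counts are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_samples(prior_alpha, observed_counts, n=1000):
    """Draw posterior samples of a judge's simplex point under a Dirichlet
    prior, which is conjugate to multinomial score counts: the posterior
    is Dirichlet(prior_alpha + observed_counts)."""
    alpha = np.asarray(prior_alpha, dtype=float) + np.asarray(observed_counts, dtype=float)
    return rng.dirichlet(alpha, size=n)

# Two-level scoring (M=2): the judge reported 7 "wins" and 3 "losses".
counts = [7, 3]

# Sensitivity analysis: vary the prior and see how the posterior moves.
weak = posterior_samples([1, 1], counts)      # weak prior: data-dominated
strong = posterior_samples([50, 50], counts)  # strong prior pulling toward 50/50

weak_mean = weak[:, 0].mean()      # near 8/12, the data-driven estimate
strong_mean = strong[:, 0].mean()  # shrunk toward 0.5 by the prior
```

Comparing `weak_mean` and `strong_mean` shows the kind of prior-dependence an epistemic-uncertainty analysis probes: if conclusions flip as the prior over judge quality varies, judge-only rankings are not robust.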
