Permutation-Consensus Listwise Judging for Robust Factuality Evaluation

arXiv cs.CL · March 24, 2026


Key Points

  • The paper identifies candidate-order sensitivity as a key instability in listwise factuality evaluation when LLMs are used as judges to rank multiple answers.
  • It proposes PCFJudge, an inference-time method that reruns the same listwise factuality-first prompt over multiple permutations of the candidate set and aggregates scores, rankings, and uncertainty into a consensus.
  • Experiments on RewardBench 2 Factuality show PCFJudge can improve over direct judging by up to 7 absolute points.
  • Ablation studies indicate that most of the benefit comes from permutation consensus itself rather than adding more complex arbitration mechanisms.
  • The authors conclude that order-induced variance is a meaningful contributor to factuality-judging error and that averaging over nuisance presentation changes can make LLM evaluations more reliable.
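The core mechanics described above can be sketched in a few lines: judge the same candidate set under several orderings, map each per-position score back to its original candidate, and average. This is a minimal illustration, not the paper's implementation; `judge_fn` is a hypothetical stand-in for an LLM judge call, and the toy judge below simply adds a position bias to mimic order sensitivity.

```python
import itertools
import statistics

def permutation_consensus(candidates, judge_fn, max_perms=6):
    """Average per-candidate factuality scores over several orderings.

    judge_fn(ordered_candidates) -> list of scores, one per candidate
    in the order presented (hypothetical interface, assumed here).
    Returns {original_index: (mean_score, score_stdev)} where the
    stdev serves as a simple order-instability (uncertainty) signal.
    """
    perms = list(itertools.permutations(range(len(candidates))))[:max_perms]
    scores = {i: [] for i in range(len(candidates))}
    for perm in perms:
        ordered = [candidates[i] for i in perm]
        judged = judge_fn(ordered)            # scores in presentation order
        for pos, idx in enumerate(perm):      # map back to original indices
            scores[idx].append(judged[pos])
    return {i: (statistics.mean(v), statistics.pstdev(v))
            for i, v in scores.items()}

# Toy judge: fixed per-candidate quality minus a small bonus lost per
# position, mimicking an order-sensitive LLM judge.
def toy_judge(ordered):
    base = {"a": 0.9, "b": 0.6, "c": 0.3}
    return [base[c] - 0.05 * pos for pos, c in enumerate(ordered)]

consensus = permutation_consensus(["a", "b", "c"], toy_judge)
ranking = sorted(consensus, key=lambda i: consensus[i][0], reverse=True)
```

Averaging over all orderings cancels the position bias exactly, so the consensus ranking recovers the underlying quality order even though any single ordering's scores are skewed.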

Abstract

Large language models (LLMs) are now widely used as judges, yet their decisions can change under presentation choices that should be irrelevant. We study one such source of instability: candidate-order sensitivity in listwise factuality evaluation, where several answers can look similarly polished while differing sharply in hallucination risk. We introduce PCFJudge, an inference-time method that reruns the same factuality-first listwise prompt over multiple orderings of the same candidate set and aggregates the resulting scores, ranks, and uncertainty signals into a single consensus decision. On RewardBench 2 Factuality, PCFJudge improves over direct judging by up to 7 absolute points. Development ablations show that the dominant gain comes from permutation consensus itself rather than from heavier arbitration layers. These results suggest that a meaningful share of factuality-judging error arises from order instability, and that averaging over this nuisance variation is a simple and effective way to make LLM evaluation more reliable.
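The abstract notes that PCFJudge aggregates ranks as well as scores. One plausible rank-level consensus rule, shown here purely as an illustration (the summary does not specify the paper's actual aggregation), is a Borda count over the per-ordering rankings:

```python
from collections import defaultdict

def borda_consensus(rankings, n):
    """Combine per-ordering rankings (best first) into one consensus.

    rankings: list of rankings, each a list of candidate indices.
    A candidate earns n-1 points for first place, n-2 for second, etc.;
    candidates are returned sorted by total points, best first.
    """
    points = defaultdict(int)
    for ranking in rankings:
        for place, idx in enumerate(ranking):
            points[idx] += n - 1 - place
    return sorted(range(n), key=lambda i: points[i], reverse=True)

# Two orderings disagree on the lower places but agree candidate 2 is
# best; the consensus preserves that agreement.
rankings = [[2, 0, 1], [2, 1, 0]]
```

A rule like this is robust to a judge that flips adjacent pairs under reordering, since a candidate consistently ranked near the top accumulates points regardless of which permutation produced each ranking.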