FUSE: Ensembling Verifiers with Zero Labeled Data

arXiv stat.ML, April 21, 2026


Key Points

  • The paper introduces FUSE (Fully Unsupervised Score Ensembling), which improves LLM output verification by ensembling multiple verifiers without using any ground-truth correctness labels.
  • FUSE works by controlling conditional dependencies among verifiers so that spectral ensembling methods from the weak-supervision/ensembling literature perform well even without labels.
  • Experiments show FUSE can match or outperform semi-supervised alternatives in test-time scaling setups across varied generator models, verifier types, and benchmarks.
  • Validation spans both established academic benchmarks (e.g., GPQA Diamond) and harder, unsaturated frontier benchmarks such as Humanity’s Last Exam and IMO Shortlist questions.

Abstract

Verification of model outputs is rapidly emerging as a key primitive for both training and real-world deployment of large language models (LLMs). In practice, this often involves using imperfect LLM judges and reward models since ground truth acquisition can be time-consuming and expensive. We introduce Fully Unsupervised Score Ensembling (FUSE), a method for improving verification quality by ensembling verifiers without access to ground truth correctness labels. The key idea behind FUSE is to control conditional dependencies between verifiers in a manner that improves the unsupervised performance of a class of spectral algorithms from the ensembling literature. Despite requiring zero ground truth labels, FUSE typically matches or improves upon semi-supervised alternatives in test-time scaling experiments with diverse sets of generator models, verifiers, and benchmarks. In particular, we validate our method on both conventional academic benchmarks such as GPQA Diamond and on frontier, unsaturated benchmarks such as Humanity's Last Exam and IMO Shortlist questions.
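To make the spectral-ensembling idea concrete, here is a minimal sketch of a classic unsupervised spectral ensemble (in the style of spectral meta-learners from the weak-supervision literature), not the paper's FUSE algorithm itself. The premise: if verifiers' errors are roughly conditionally independent given correctness, the off-diagonal of the verifier score covariance matrix is approximately rank one, and its leading eigenvector recovers each verifier's reliability without any labels. All function names below are hypothetical illustrations.

```python
import numpy as np

def spectral_ensemble_weights(scores: np.ndarray) -> np.ndarray:
    """Estimate per-verifier weights from unlabeled agreement statistics.

    scores: (m, n) array of m verifiers' scores on n candidate outputs.
    Under an (assumed) conditional-independence model, the off-diagonal
    entries of the covariance matrix factor as reliability_i * reliability_j,
    so the leading eigenvector is proportional to the reliabilities.
    """
    m, n = scores.shape
    centered = scores - scores.mean(axis=1, keepdims=True)
    cov = centered @ centered.T / n
    # The diagonal mixes per-verifier variance with reliability; zero it
    # out so the eigenvector reflects only cross-verifier agreement.
    np.fill_diagonal(cov, 0.0)
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    w = eigvecs[:, -1]                      # leading eigenvector
    # Resolve the sign ambiguity by assuming verifiers are, on average,
    # better than chance (positive total weight).
    return -w if w.sum() < 0 else w

def ensemble_scores(scores: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted combination of centered verifier scores per output."""
    centered = scores - scores.mean(axis=1, keepdims=True)
    return weights @ centered
```

On synthetic data with verifiers of varying accuracy, the recovered weights order verifiers by reliability, and the weighted ensemble typically outscores any single verifier; FUSE's contribution, per the abstract, is shaping conditional dependencies so this class of spectral method works well on real verifier ensembles.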