ACES: Who Tests the Tests? Leave-One-Out AUC Consistency for Code Generation

arXiv cs.LG / 4/7/2026


Key Points

  • The paper argues that LLM code selection using LLM-generated tests is difficult because tests may be wrong, creating a circular dependency between identifying correct code and validating test correctness.
  • It proposes that instead of estimating whether each test is correct, tests should be used for their ability to *separate* correct from incorrect code, captured via a leave-one-out ranking consistency metric.
  • The method introduces LOO-AUC (leave-one-out AUC) and shows theoretically that expected LOO-AUC is proportional to each test’s true discriminative power.
  • It presents ACES with two variants: ACES-C using closed-form weighting under an assumption of average test quality, and ACES-O using iterative optimization of a differentiable LOO-AUC objective without that assumption.
  • The approach relies only on the binary pass matrix, adds negligible overhead, and reports state-of-the-art Pass@k improvements across multiple code generation benchmarks.
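
The leave-one-out idea above can be sketched in a few lines. This is a minimal illustration of the LOO-AUC computation as described, not the paper's implementation; the function name and toy pass matrix are my own.

```python
def loo_auc(pass_matrix, j):
    """Leave-one-out AUC for test j: hold out test j, rank candidate codes
    by their pass count on the remaining tests, and measure how well test
    j's own pass/fail verdicts agree with that ranking (Mann-Whitney AUC)."""
    scores = [sum(row) - row[j] for row in pass_matrix]  # rank without test j
    labels = [row[j] for row in pass_matrix]             # held-out verdicts
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:          # test j passes everything or nothing
        return 0.5                  # -> uninformative, chance-level AUC
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy pass matrix: pass_matrix[i][j] = 1 iff candidate code i passes test j.
P = [[1, 1, 1],
     [1, 1, 0],
     [0, 0, 1],
     [0, 1, 0]]
print([round(loo_auc(P, j), 3) for j in range(3)])  # -> [0.75, 0.5, 0.375]
```

A test whose verdicts line up with the consensus ranking of the other tests scores above 0.5; a test that contradicts it scores below 0.5, flagging it as likely incorrect without ever labeling any single code as ground truth.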

Abstract

Selecting LLM-generated code candidates using LLM-generated tests is challenging because the tests themselves may be incorrect. Existing methods either treat all tests equally or rely on ad-hoc heuristics to filter unreliable tests. Yet determining test correctness requires knowing which codes are correct, creating a *circular dependency*. Our key insight is that we need not determine test correctness at all: *test votes should rank, not merely count*. What matters is not how many codes pass a test, but whether the test can *distinguish* correct from incorrect code. We break the circular dependency via leave-one-out evaluation: hold out one test, rank codes by their aggregate scores on all remaining tests, and measure whether the held-out test's pass/fail pattern agrees with this ranking. We formalize this agreement as the leave-one-out AUC (LOO-AUC) and prove that the expected LOO-AUC is proportional to each test's ability to separate correct code from incorrect code. Building on this, we propose **ACES** (**A**UC **C**onsist**E**ncy **S**coring) with two complementary variants: ACES-C provides closed-form weights that provably approximate the oracle in expectation under a mild assumption on average test quality; ACES-O drops this assumption and iteratively optimizes a differentiable LOO-AUC objective. Both operate solely on the binary pass matrix with negligible overhead, and achieve state-of-the-art Pass@k on multiple code generation benchmarks.
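The selection step the abstract describes can be illustrated end to end: score each test by its leave-one-out consistency, convert that score into a weight, and pick the candidate with the highest weighted vote. The mapping from LOO-AUC to a weight used here (2·AUC − 1, clipped at zero, so chance-level tests get no vote) is an illustrative assumption of this sketch, not ACES-C's actual closed form.

```python
def loo_auc(P, j):
    """Hold out test j, rank codes by pass count on the other tests,
    and return the AUC of test j's verdicts against that ranking."""
    scores = [sum(row) - row[j] for row in P]
    pos = [s for s, row in zip(scores, P) if row[j] == 1]
    neg = [s for s, row in zip(scores, P) if row[j] == 0]
    if not pos or not neg:
        return 0.5
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def select_best(P):
    """Return the index of the candidate with the highest weighted vote.
    Weight per test: max(0, 2*LOO-AUC - 1) -- an illustrative choice,
    not the paper's ACES-C weighting."""
    n_tests = len(P[0])
    w = [max(0.0, 2 * loo_auc(P, j) - 1) for j in range(n_tests)]
    votes = [sum(wj * pij for wj, pij in zip(w, row)) for row in P]
    return max(range(len(P)), key=votes.__getitem__)

# Toy pass matrix: 4 candidate codes x 4 generated tests.
P = [[1, 1, 1, 1],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 1, 0, 0]]
print(select_best(P))  # -> 0
```

Note that the whole pipeline consumes only the binary pass matrix, which matches the abstract's claim of negligible overhead: no extra LLM calls or test re-execution are needed beyond running the candidates against the generated tests once.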