Evolutionary Search for Automated Design of Uncertainty Quantification Methods

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that uncertainty quantification (UQ) methods for large language models are often handcrafted with domain heuristics, which can limit scalability and generality.
  • It proposes an LLM-powered evolutionary search approach that automatically discovers unsupervised UQ methods, encoded as Python programs, rather than manually designing them.
  • On atomic claim verification, the evolved UQ methods outperform strong manually designed baselines, achieving up to 6.7% relative ROC-AUC improvement across nine datasets while generalizing robustly out-of-distribution.
  • The authors find that different LLMs pursue distinct evolutionary strategies, with Claude models favoring high-feature-count linear estimators and GPT-oss-120B tending toward simpler positional weighting schemes.
  • Results also suggest that increased method complexity does not always help—only Sonnet 4.5 and Opus 4.5 reliably benefit, while Opus 4.6 regresses—indicating nuanced interactions between model behavior and evolutionary search.
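The search loop sketched in the key points can be illustrated with a toy hill-climbing version. This is a hypothetical sketch, not the paper's implementation: the real system has an LLM rewrite candidate UQ methods as Python programs, which the `mutate` stub below replaces with random parameter perturbations, and the two-feature linear scorer stands in for richer evolved programs.

```python
import random

def roc_auc(scores, labels):
    """ROC-AUC via the rank-sum (Mann-Whitney U) formulation."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    # Count pairs where a positive outranks a negative; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def make_scorer(weights):
    # A candidate UQ method: a linear combination of per-claim features
    # (e.g. a log-prob signal and a claim-position signal; both hypothetical).
    return lambda feats: sum(w * f for w, f in zip(weights, feats))

def mutate(weights, rng):
    # Stand-in for an LLM proposing an edit to the candidate program.
    return [w + rng.gauss(0, 0.3) for w in weights]

def evolve(dataset, generations=30, pop=8, seed=0):
    """Greedy evolutionary search: keep whichever candidate scores best."""
    rng = random.Random(seed)
    feats, labels = zip(*dataset)
    best = [rng.gauss(0, 1) for _ in feats[0]]
    best_auc = roc_auc([make_scorer(best)(f) for f in feats], labels)
    for _ in range(generations):
        for cand in (mutate(best, rng) for _ in range(pop)):
            auc = roc_auc([make_scorer(cand)(f) for f in feats], labels)
            if auc > best_auc:
                best, best_auc = cand, auc
    return best, best_auc
```

The fitness signal here is unsupervised in spirit only at evaluation time; in the paper, candidates are scored on atomic claim verification datasets, and the same ROC-AUC criterion drives selection.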

Abstract

Uncertainty quantification (UQ) methods for large language models are predominantly designed by hand based on domain knowledge and heuristics, limiting their scalability and generality. We apply LLM-powered evolutionary search to automatically discover unsupervised UQ methods represented as Python programs. On the task of atomic claim verification, our evolved methods outperform strong manually designed baselines, achieving up to 6.7% relative ROC-AUC improvement across 9 datasets while generalizing robustly out-of-distribution. Qualitative analysis reveals that different LLMs employ qualitatively distinct evolutionary strategies: Claude models consistently design high-feature-count linear estimators, while GPT-oss-120B gravitates toward simpler and more interpretable positional weighting schemes. Surprisingly, only Sonnet 4.5 and Opus 4.5 reliably leverage increased method complexity to improve performance; Opus 4.6 shows an unexpected regression relative to its predecessor. Overall, our results indicate that LLM-powered evolutionary search is a promising paradigm for automated, interpretable hallucination detector design.
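For readers parsing the headline number: "relative ROC-AUC improvement" is the gain over the baseline expressed as a fraction of the baseline's score, not an absolute difference in AUC points. The figures below are illustrative only; the paper reports up to a 6.7% relative gain, but does not tie it to these particular baseline values.

```python
def relative_improvement(new_auc, baseline_auc):
    """(new - baseline) / baseline, the usual relative-gain convention."""
    return (new_auc - baseline_auc) / baseline_auc

# Illustrative: a baseline at 0.60 ROC-AUC improved to 0.64 is a ~6.7%
# relative gain, even though the absolute gain is only 4 AUC points.
print(round(relative_improvement(0.64, 0.60), 3))  # → 0.067
```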