Differentiable Conformal Training for LLM Reasoning Factuality

arXiv cs.LG · April 23, 2026

📰 News · Models & Research

Key Points

  • The paper addresses LLM hallucinations by using conformal prediction to calibrate error rates and filter out risky claims, so that the hallucination rate stays below a user-specified threshold (e.g., 10%).
  • It extends earlier conformal factuality methods to multi-step reasoning via dependency graphs that jointly validate a claim with its logical ancestors.
  • The existing approach, “Coherent Factuality,” is not differentiable, forcing reliance on hand-crafted scorers that can discard nearly 60% of true claims at high reliability levels.
  • The authors propose “Differentiable Coherent Factuality (DCF),” a differentiable relaxation that allows learning better scorers while provably retaining the original statistical confidence guarantees.
  • Experiments on two reasoning benchmarks show DCF can improve claim retention by up to 141% without sacrificing reliability guarantees, advancing toward more reliable conformal LLM systems.
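The conformal filtering idea behind these points can be illustrated with a minimal split-conformal sketch: calibrate a score threshold on held-out, labeled claims so that retained test claims are hallucinated at most an `alpha` fraction of the time. All function names, the score/label representation, and the nonconformity choice here are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def conformal_threshold(cal_outputs, alpha=0.1):
    """Calibrate a score threshold tau on held-out data (illustrative
    split-conformal sketch, not the paper's method).

    cal_outputs: list of responses, each a list of (score, is_true)
    pairs for the claims in that response (hypothetical format).
    """
    # Nonconformity score per response: the smallest threshold that
    # would remove every false claim, i.e. the max score among false
    # claims (-inf if the response has no false claims).
    scores = []
    for claims in cal_outputs:
        false_scores = [s for s, ok in claims if not ok]
        scores.append(max(false_scores) if false_scores else -np.inf)
    n = len(scores)
    # Conformal quantile with the standard finite-sample correction.
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q, 1.0), method="higher")

def filter_claims(claims, tau):
    """Keep only claims scoring strictly above the calibrated tau."""
    return [(s, ok) for s, ok in claims if s > tau]
```

Under exchangeability of calibration and test responses, this thresholding keeps the per-response hallucination probability below `alpha`; the paper's contribution is making such a pipeline differentiable so the scorer itself can be learned.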

Abstract

Large Language Models (LLMs) frequently hallucinate, limiting their reliability in critical applications. Conformal Prediction (CP) addresses this by calibrating error rates on held-out data to provide statistically valid confidence guarantees. Recent work extends CP to LLM factuality to filter out risky claims, ensuring that hallucination rates remain below a user-specified level (e.g., 10%). While prior methods treat claims independently, Coherent Factuality extends to multi-step reasoning by representing outputs as dependency graphs and jointly validating claims with their logical ancestors. A key limitation is that Coherent Factuality is not differentiable, requiring hand-crafted scorers that remove nearly 60% of true claims at high reliability levels. We introduce Differentiable Coherent Factuality (DCF), a fully differentiable relaxation that enables learning improved scorers while provably recovering the original algorithm's guarantees. Experiments on two benchmark reasoning datasets demonstrate that DCF achieves up to 141% improvement in claim retention while maintaining reliability guarantees, representing a significant step towards reliable conformal LLM systems.
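The "jointly validating claims with their logical ancestors" step in the abstract can be sketched as a simple check over a dependency graph: a claim is retained only if both it and every claim it depends on (transitively) clear the calibrated threshold. The graph encoding, identifiers, and threshold semantics below are assumptions for illustration, not the paper's implementation.

```python
from functools import lru_cache

def coherent_filter(scores, parents, tau):
    """Retain a claim only if it and all its logical ancestors score
    above tau (simplified sketch of coherence-aware filtering).

    scores:  dict mapping claim id -> confidence score (hypothetical)
    parents: dict mapping claim id -> ids it logically depends on
    """
    @lru_cache(maxsize=None)
    def ok(cid):
        # A claim passes iff its own score clears tau AND every
        # parent claim (and hence every ancestor) also passes.
        return scores[cid] > tau and all(ok(p) for p in parents.get(cid, []))

    return [c for c in scores if ok(c)]
```

Note how a high-scoring claim is still dropped if any step it builds on fails the check; this is exactly why coherence-aware filtering is stricter than per-claim filtering, and why a learned scorer (as DCF enables) has more room to improve retention.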