Differentiable Conformal Training for LLM Reasoning Factuality
arXiv cs.LG · April 23, 2026
Key Points
- The paper tackles LLM hallucinations with conformal prediction: a calibrated score threshold filters generated claims so that the hallucination rate stays below a user-specified level (e.g., 10%); a minimal calibration sketch follows this list.
- It extends earlier conformal factuality methods to multi-step reasoning via dependency graphs that validate each claim jointly with its logical ancestors (second sketch below).
- The existing approach, "Coherent Factuality," is not differentiable, so it must rely on hand-crafted scorers that can discard nearly 60% of true claims at high reliability levels.
- The authors propose "Differentiable Coherent Factuality" (DCF), a differentiable relaxation that lets better scorers be learned while provably retaining the original statistical confidence guarantees (third sketch below).
- Experiments on two reasoning benchmarks show DCF can improve claim retention by up to 141% without sacrificing reliability guarantees, advancing toward more reliable conformal LLM systems.
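
A minimal sketch of the split-conformal calibration step from the first point, assuming the standard setup in the conformal factuality line of work: a scorer assigns a confidence value to each sub-claim, and a held-out calibration set carries per-claim correctness labels. All function and variable names here are illustrative, not the paper's API.

```python
import numpy as np

def conformal_filter_threshold(cal_claim_scores, cal_claim_labels, alpha=0.10):
    """Pick a score threshold tau so that, with probability >= 1 - alpha,
    a future response filtered to claims scoring above tau contains no
    false claim (split-conformal guarantee over calibration responses)."""
    nonconformity = []
    for s, y in zip(cal_claim_scores, cal_claim_labels):
        s, y = np.asarray(s, float), np.asarray(y, int)
        false_scores = s[y == 0]
        # Nonconformity of a response: the highest score any false claim
        # reaches; filtering strictly above it removes every false claim.
        nonconformity.append(false_scores.max() if false_scores.size else -np.inf)
    n = len(nonconformity)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return np.quantile(nonconformity, level, method="higher")

# Usage: keep only claims whose score exceeds the calibrated threshold.
# tau = conformal_filter_threshold(scores_per_response, labels_per_response)
# kept = [c for c, s in zip(claims, claim_scores) if s > tau]
```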
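The joint validation over dependency graphs can be pictured as follows: a claim survives only if its own score and the scores of all its logical ancestors clear the calibrated threshold. This is a hedged sketch of that idea; the graph representation and names are assumptions, not the paper's implementation.

```python
def coherent_retain(scores, parents, tau):
    """Return the set of claim ids that pass the threshold together with
    every ancestor in the dependency graph (assumed acyclic), so no claim
    is kept without the premises it logically rests on.

    scores:  dict claim_id -> scorer confidence
    parents: dict claim_id -> list of claim_ids it directly depends on
    """
    memo = {}
    def ok(claim):
        if claim not in memo:
            memo[claim] = scores[claim] > tau and all(
                ok(p) for p in parents.get(claim, [])
            )
        return memo[claim]
    return {c for c in scores if ok(c)}

# Example: step "c2" depends on "c1"; because "c1" falls below tau,
# "c2" is dropped even though its own score passes.
print(coherent_retain({"c1": 0.6, "c2": 0.9}, {"c2": ["c1"]}, tau=0.8))  # set()
```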
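One way such a differentiable relaxation can look, offered purely as an assumption since the summary does not give DCF's exact objective: soften the hard keep/drop indicator with a sigmoid so a scorer network can be trained by gradient descent to retain more true claims while holding the soft hallucination rate under the target, with the hard conformal calibration above still run afterward to restore the exact guarantee.

```python
import torch

def soft_retention_loss(scores, labels, tau, temperature=0.1, alpha=0.10, lam=10.0):
    """Differentiable surrogate (illustrative, not the paper's exact loss):
    reward softly retained true claims, and penalize the soft hallucination
    rate once it exceeds the target alpha."""
    keep = torch.sigmoid((scores - tau) / temperature)        # soft 1[score > tau]
    retained_true = (keep * labels).sum()                     # mass kept on true claims
    soft_halluc = (keep * (1 - labels)).sum() / keep.sum().clamp_min(1e-8)
    return -retained_true + lam * torch.relu(soft_halluc - alpha)
```

The temperature controls how closely the sigmoid approximates the hard indicator; annealing it toward zero during training would tighten the match between the surrogate and the final hard filter.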