Know When You're Wrong: Aligning Confidence with Correctness for LLM Error Detection

arXiv cs.LG / 3/10/2026

Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a normalized confidence score, based on output anchor-token probabilities, and a self-evaluation framework that yield reliable confidence estimates across seven benchmark tasks and five LLMs of varying architectures and sizes (a minimal sketch follows this list).
  • The study shows that supervised fine-tuning (SFT) produces well-calibrated confidence scores, while reinforcement learning methods (PPO, GRPO) and DPO lead to overconfidence through reward exploitation.
  • The authors propose using post-reinforcement learning supervised fine-tuning with self-distillation to restore confidence reliability in models trained with reinforcement learning.
  • Empirical results demonstrate significant gains in confidence-correctness AUROC (0.806 to 0.879) and a large reduction in calibration error (0.163 to 0.034) on Qwen3-4B for SFT-trained models, while GRPO and DPO degrade confidence reliability (a metric sketch also follows below).
  • A practical application using adaptive retrieval-augmented generation triggers retrieval only when the model lacks confidence, recovering 95% of the maximum achievable accuracy gain on TriviaQA with only 58% of the retrieval operations.
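
The anchor-token score above admits a very small implementation. Below is a minimal sketch, assuming per-token log-probabilities are available at the anchor position (as most inference APIs expose); the helper name and candidate sets are illustrative, not the paper's code:

```python
import math

def normalized_confidence(logprobs: dict[str, float], candidates: list[str]) -> float:
    """Normalize the probability of the chosen anchor token over a fixed
    candidate set (class labels for structured tasks, or ["Yes", "No"]
    for the self-evaluation prompt on open-ended generation).

    `logprobs` maps candidate tokens to their log-probabilities at the
    anchor position for a single decoding step.
    """
    # Convert log-probabilities to probabilities for the candidate set only.
    probs = {tok: math.exp(logprobs[tok]) for tok in candidates}
    total = sum(probs.values())
    # Renormalize so the candidates sum to 1, then take the top share:
    # this is the model's confidence in its chosen anchor token.
    return max(probs.values()) / total

# Example: a self-evaluation step where the model answered "Yes".
anchor_logprobs = {"Yes": -0.12, "No": -2.2}
print(normalized_confidence(anchor_logprobs, ["Yes", "No"]))  # ~0.89
```

The same function covers both modes: pass the class-label tokens for a structured task, or the Yes/No pair for self-evaluation.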

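The confidence-correctness AUROC and calibration error reported in the paper can be measured with standard tools. A sketch, assuming expected calibration error (ECE) with equal-width bins; the paper's exact binning scheme may differ:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def expected_calibration_error(conf, correct, n_bins: int = 10) -> float:
    """Equal-width-bin ECE: the sample-weighted gap between mean
    confidence and accuracy within each confidence bin."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Weight each bin's |confidence - accuracy| gap by its share of samples.
            ece += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return ece

# conf: per-example confidence scores; correct: 1 if the answer was right.
conf = [0.95, 0.80, 0.60, 0.55, 0.90]
correct = [1, 1, 0, 1, 0]
print(expected_calibration_error(conf, correct))
print(roc_auc_score(correct, conf))  # confidence-correctness AUROC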

arXiv:2603.06604 (cs)
[Submitted on 18 Feb 2026]

Title: Know When You're Wrong: Aligning Confidence with Correctness for LLM Error Detection

Authors: Xie Xiaohu, Liu Xiaohu, Yao Benjamin
Abstract: As large language models (LLMs) are increasingly deployed in critical decision-making systems, the lack of reliable methods to measure their uncertainty presents a fundamental trustworthiness risk. We introduce a normalized confidence score based on output anchor token probabilities: classification labels for structured tasks and self-evaluation responses (Yes/No) for open-ended generation. This enables direct detection of errors and hallucinations with minimal overhead and without external validation. We make three key contributions. First, we propose a normalized confidence score and self-evaluation framework that exposes reliable confidence estimates for error detection across seven diverse benchmark tasks and five LLMs of varying architectures and sizes. Second, our theoretical analysis reveals that supervised fine-tuning (SFT) yields well-calibrated confidence through maximum-likelihood estimation, whereas reinforcement learning methods (PPO, GRPO) and DPO induce overconfidence via reward exploitation. Third, we propose post-RL SFT with self-distillation to restore confidence reliability in RL-trained models. Empirical results demonstrated that SFT improved average confidence-correctness AUROC from 0.806 to 0.879 and reduced calibration error from 0.163 to 0.034 on Qwen3-4B, while GRPO and DPO degraded confidence reliability. We demonstrated practical value through adaptive retrieval-augmented generation (RAG) that selectively retrieves context when the model lacks confidence, using only 58% of retrieval operations to recover 95% of the maximum achievable accuracy gain on TriviaQA.
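
The adaptive RAG application in the final sentence amounts to a confidence-gated retrieval policy. A minimal sketch, in which `answer`, `retrieve`, and `self_eval_confidence` are hypothetical placeholders for the model call, the retriever, and the normalized anchor-token confidence, and the threshold is an assumed tunable:

```python
# Hypothetical placeholders: in practice these wrap the LLM call, the
# retriever, and the normalized anchor-token confidence sketched earlier.
def answer(question: str, context: str | None = None) -> str: ...
def retrieve(question: str) -> str: ...
def self_eval_confidence(question: str, draft: str) -> float: ...

CONF_THRESHOLD = 0.7  # assumed value; tune for the desired retrieval budget

def adaptive_rag(question: str) -> str:
    # First answer from parametric knowledge alone (no retrieval cost).
    draft = answer(question)
    # Score the draft with the Yes/No self-evaluation confidence.
    if self_eval_confidence(question, draft) >= CONF_THRESHOLD:
        return draft  # confident enough: skip retrieval entirely
    # Low confidence: fetch supporting context and answer again.
    return answer(question, context=retrieve(question))
```

Sweeping the threshold trades retrieval budget against accuracy; the abstract's operating point (58% of retrievals for 95% of the maximum gain on TriviaQA) corresponds to one such setting.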
Subjects: Machine Learning (cs.LG); Computation and Language (cs.CL)
Cite as: arXiv:2603.06604 [cs.LG]
  (or arXiv:2603.06604v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.06604

Submission history

From: Xiaohu Xie
[v1] Wed, 18 Feb 2026 07:05:12 UTC (70 KB)