Confident in a Confidence Score: Investigating the Sensitivity of Confidence Scores to Supervised Fine-Tuning

arXiv cs.CL / 4/13/2026


Key Points

  • The paper examines how uncertainty/confidence scores in language models behave and how well they correlate with output quality for practical uses like hallucination detection and user alerts.
  • It reports that supervised fine-tuning (SFT) can degrade the correlation between confidence scores and true output quality, indicating that confidence metrics become less reliable after adaptation.
  • The authors attribute the miscorrelation to changes in confidence scores driven by factors unrelated to quality, such as whether outputs resemble the training distribution.
  • A downstream case study shows that ignoring this post-SFT misalignment can significantly reduce the usefulness of confidence scores for real tasks.
  • The work concludes that confidence metrics cannot be used off the shelf after fine-tuning, and motivates the development and testing of confidence measures that are more robust to fine-tuning.

Abstract

Uncertainty quantification is a set of techniques that measure confidence in language models. They can be used, for example, to detect hallucinations or alert users to review uncertain predictions. To be useful, these confidence scores must be correlated with the quality of the output. However, recent work found that fine-tuning can affect the correlation between confidence scores and quality. Hence, we investigate the underlying behavior of confidence scores to understand their sensitivity to supervised fine-tuning (SFT). We find that post-SFT, the correlation of various confidence scores degrades, which can stem from changes in confidence scores due to factors other than the output quality, such as the output's similarity to the training distribution. We demonstrate via a case study how failing to address this miscorrelation reduces the usefulness of the confidence scores on a downstream task. Our findings show that confidence metrics cannot be used off the shelf without testing, and motivate the need for developing metrics that are more robust to fine-tuning.
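The correlation check the abstract describes can be sketched minimally. The snippet below is an illustrative assumption, not the paper's exact setup: it uses one common confidence score (length-normalized sequence log-probability) and synthetic per-token log-probs with made-up quality labels, then measures rank correlation between confidence and quality, which is the quantity the paper reports degrading after SFT.

```python
import math

def sequence_confidence(token_logprobs):
    """Length-normalized confidence: exp of the mean token log-probability."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def spearman(xs, ys):
    """Spearman rank correlation (no tie handling; for illustration only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)

# Hypothetical outputs: (per-token log-probs, quality label in [0, 1]).
outputs = [
    ([-0.1, -0.2, -0.1], 0.9),   # high confidence, high quality
    ([-0.5, -0.8, -0.6], 0.7),
    ([-1.2, -1.5, -1.0], 0.4),
    ([-2.5, -3.0, -2.8], 0.1),   # low confidence, low quality
]
confs = [sequence_confidence(lp) for lp, _ in outputs]
quals = [q for _, q in outputs]
print(round(spearman(confs, quals), 3))  # → 1.0 (perfectly monotone here)
```

On real data, this correlation would be computed before and after SFT on the same evaluation set; the paper's finding is that it can drop post-SFT even when output quality itself does not.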