Beyond Surface Statistics: Robust Conformal Prediction for LLMs via Internal Representations

arXiv cs.CL / 4/20/2026


Key Points

  • The paper addresses reliability-critical LLM deployments where common uncertainty signals (e.g., token probabilities, entropy, self-consistency) can be poorly calibrated under deployment–training mismatch.
  • It proposes a conformal prediction approach for LLM QA that replaces output-facing uncertainty with nonconformity scores derived from internal representations.
  • Specifically, it introduces Layer-Wise Information (LI) scores that quantify how conditioning on the input changes predictive entropy across the model’s depth.
  • Evaluations on closed-ended and open-domain QA benchmarks show the method improves the validity–efficiency trade-off, with the most pronounced gains under cross-domain shift, while maintaining in-domain reliability at the same nominal risk level.
  • The results indicate that internal representational signals can yield more stable and informative conformal scores than surface-level uncertainty measures when distributions shift.
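The exact definition of the LI score is not reproduced in this summary; as a rough illustration of the idea the key points describe, the sketch below computes a per-layer drop in predictive entropy when the model is conditioned on the input. The distribution readout (e.g., a logit-lens-style projection at each layer) and the aggregation are assumptions, not the paper's method:

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy (nats) of a next-token probability distribution."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def layerwise_information(cond_dists, uncond_dists):
    """Per-layer entropy reduction from conditioning on the input.

    cond_dists / uncond_dists: per-layer next-token distributions
    (hypothetical readouts), with and without the input question.
    A large drop suggests the input strongly reshapes the prediction.
    """
    return [entropy(u) - entropy(c) for c, u in zip(cond_dists, uncond_dists)]

# Toy example: one layer where conditioning collapses a uniform
# distribution to a near-certain answer.
li = layerwise_information(cond_dists=[[1.0, 0.0]], uncond_dists=[[0.5, 0.5]])
```

A scalar nonconformity score would then aggregate these per-layer values (e.g., a weighted sum across depth), with lower information gain treated as higher nonconformity.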

Abstract

Large language models are increasingly deployed in settings where reliability matters, yet output-level uncertainty signals such as token probabilities, entropy, and self-consistency can become brittle under calibration–deployment mismatch. Conformal prediction provides finite-sample validity under exchangeability, but its practical usefulness depends on the quality of the nonconformity score. We propose a conformal framework for LLM question answering that uses internal representations rather than output-facing statistics: specifically, we introduce Layer-Wise Information (LI) scores, which measure how conditioning on the input reshapes predictive entropy across model depth, and use them as nonconformity scores within a standard split conformal pipeline. Across closed-ended and open-domain QA benchmarks, our method achieves a better validity–efficiency trade-off than strong text-level baselines, with the clearest gains under cross-domain shift, while maintaining competitive in-domain reliability at the same nominal risk level. These results suggest that internal representations can provide more informative conformal scores when surface-level uncertainty is unstable under distribution shift.
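The "standard split conformal pipeline" the abstract refers to can be sketched as follows, for any scalar nonconformity score (the paper's LI score or otherwise). The function names are illustrative, not from the paper; the threshold uses the usual finite-sample conformal quantile:

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha=0.1):
    """Threshold from calibration-set nonconformity scores.

    Under exchangeability, including every candidate whose score is at
    most this quantile yields coverage of at least 1 - alpha.
    """
    n = len(cal_scores)
    # Finite-sample-adjusted quantile level: ceil((n+1)(1-alpha)) / n.
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, q, method="higher"))

def prediction_set(candidate_scores, qhat):
    """Indices of candidate answers whose nonconformity is below threshold."""
    return [i for i, s in enumerate(candidate_scores) if s <= qhat]

# Toy usage: calibrate on held-out scores of true answers, then form a
# prediction set over candidate answers for a new question.
rng = np.random.default_rng(0)
qhat = split_conformal_threshold(rng.uniform(0.0, 1.0, size=200), alpha=0.1)
kept = prediction_set([0.05, 0.50, 0.99], qhat)
```

Efficiency in this framing is the average prediction-set size: a more informative score keeps sets small while the quantile step alone guarantees the nominal coverage.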