AI Navigate

Verbalizing LLM's Higher-order Uncertainty via Imprecise Probabilities

arXiv cs.AI / 3/12/2026


Key Points

  • The paper proposes novel prompt-based uncertainty elicitation techniques grounded in imprecise probabilities to better capture LLM uncertainty beyond classical probabilistic frameworks.
  • It distinguishes first-order uncertainty (uncertainty over possible responses to a prompt) from second-order uncertainty (uncertainty about the probability model itself).
  • It introduces general-purpose prompting and post-processing procedures to elicit and quantify both orders of uncertainty.
  • It demonstrates effectiveness across diverse settings, enabling more faithful uncertainty reporting and improved downstream decision-making.

Abstract

Despite the growing demand for eliciting uncertainty from large language models (LLMs), empirical evidence suggests that LLM behavior is not always adequately captured by the elicitation techniques developed under the classical probabilistic uncertainty framework. This mismatch leads to systematic failure modes, particularly in settings that involve ambiguous question-answering, in-context learning, and self-reflection. To address this, we propose novel prompt-based uncertainty elicitation techniques grounded in *imprecise probabilities*, a principled framework for representing and eliciting higher-order uncertainty. Here, first-order uncertainty captures uncertainty over possible responses to a prompt, while second-order uncertainty (uncertainty about uncertainty) quantifies indeterminacy in the underlying probability model itself. We introduce general-purpose prompting and post-processing procedures to directly elicit and quantify both orders of uncertainty, and demonstrate their effectiveness across diverse settings. Our approach enables more faithful uncertainty reporting from LLMs, improving credibility and supporting downstream decision-making.
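To make the two orders of uncertainty concrete, the sketch below shows one hypothetical post-processing step: repeated first-order probability estimates (e.g., elicited from an LLM across rephrasings of the same question) are summarized as an imprecise-probability interval, whose width reflects second-order indeterminacy. The function name and summary fields are illustrative assumptions, not the paper's actual procedure.

```python
from statistics import mean

def elicit_probability_interval(estimates):
    """Summarize repeated first-order probability estimates as an
    imprecise-probability interval. A hypothetical post-processing
    sketch; the paper's actual elicitation procedure may differ."""
    lower, upper = min(estimates), max(estimates)
    return {
        "first_order_mean": mean(estimates),       # pooled first-order uncertainty
        "interval": (lower, upper),                # lower/upper probability bounds
        "imprecision": round(upper - lower, 2),    # width = second-order indeterminacy
    }

# Simulated answers to "What is P(answer = 'yes')?" asked across
# several rephrasings of the same prompt.
estimates = [0.55, 0.70, 0.62, 0.48]
summary = elicit_probability_interval(estimates)
print(summary["interval"])      # (0.48, 0.7)
print(summary["imprecision"])   # 0.22 -- a wide interval signals high second-order uncertainty
```

A precise probabilistic model would collapse the four estimates into the single mean; the interval representation keeps their spread visible, so a downstream decision rule can treat a tight interval differently from a wide one.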