Beyond "I Don't Know": Evaluating LLM Self-Awareness in Discriminating Data and Model Uncertainty

arXiv cs.CL · April 21, 2026


Key Points

  • The paper argues that LLMs should be able to abstain when confidence is low, but existing work often treats refusals as a generic “I don’t know,” without distinguishing whether uncertainty comes from ambiguous input data or from limitations of the model itself.
  • It introduces UA-Bench, a benchmark of 3,500+ questions drawn from six datasets, specifically designed to test whether LLMs can explicitly attribute their uncertainty to either data uncertainty or model uncertainty.
  • An evaluation of 18 frontier LLMs finds that even top models struggle to reliably make this distinction, and that high answer accuracy does not necessarily correlate with strong uncertainty attribution.
  • To address the gap, the authors propose a lightweight data synthesis plus reinforcement learning approach, reporting improvements in uncertainty attribution while maintaining answer accuracy on Qwen3-4B-Instruct-2507 and Qwen3-8B (thinking mode).
  • The authors state that their code and data are publicly available.
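To make the two-metric evaluation concrete, here is a minimal sketch (not the paper's released code; all field names and labels are illustrative) of scoring answer accuracy and uncertainty-attribution accuracy separately, which is what makes it possible to observe that a model can answer well yet misattribute its uncertainty:

```python
# Hypothetical UA-Bench-style scorer: answer accuracy and uncertainty
# attribution are computed as two independent metrics. The label set
# ("answerable" | "data" | "model") is an assumption for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Item:
    gold_answer: Optional[str]   # None when the question is unanswerable
    gold_uncertainty: str        # "answerable" | "data" | "model"


@dataclass
class Prediction:
    answer: Optional[str]
    uncertainty: str             # same label set as above


def score(items: list[Item], preds: list[Prediction]) -> tuple[float, float]:
    """Return (answer_accuracy, attribution_accuracy) as separate metrics."""
    # Answer accuracy is measured only on items that have a gold answer.
    answerable = [(i, p) for i, p in zip(items, preds)
                  if i.gold_uncertainty == "answerable"]
    ans_acc = (sum(p.answer == i.gold_answer for i, p in answerable)
               / max(len(answerable), 1))
    # Attribution accuracy is measured on every item: did the model name
    # the right source of uncertainty (or correctly claim answerability)?
    attr_acc = (sum(p.uncertainty == i.gold_uncertainty
                    for i, p in zip(items, preds))
                / max(len(items), 1))
    return ans_acc, attr_acc
```

A model that answers every answerable question correctly but labels ambiguous inputs as its own capability limits would score 1.0 on the first metric and well below 1.0 on the second, which is exactly the dissociation the paper reports.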

Abstract

Reliable Large Language Models (LLMs) should abstain when confidence is insufficient. However, prior studies often treat refusal as a generic "I don't know", failing to distinguish input-level ambiguity (data uncertainty) from capability limitations (model uncertainty). This lack of distinction limits downstream action decisions like requesting clarification or invoking external tools. In this work, we introduce UA-Bench, a benchmark of over 3,500 questions drawn from six datasets spanning knowledge-intensive and reasoning-intensive tasks, designed to evaluate explicit uncertainty attribution. An evaluation of 18 frontier LLMs shows that even state-of-the-art models struggle to reliably discriminate between data uncertainty and model uncertainty, and that high answer accuracy does not necessarily imply strong uncertainty attribution ability. To narrow this gap, we propose a lightweight data synthesis and reinforcement learning strategy. Experiments on both Qwen3-4B-Instruct-2507 and Qwen3-8B in thinking mode show that the proposed method improves uncertainty attribution while preserving answer accuracy. Our code and data are publicly available.