LLMs Should Express Uncertainty Explicitly
arXiv cs.LG / 4/8/2026
Key Points
- The paper argues that uncertainty in large language models should be expressed explicitly as an interface for control, rather than estimated only after generation as a hidden quantity.
- It compares two approaches: a global calibrated confidence score for the final answer and a local in-reasoning <uncertain> marker emitted when the model enters a high-risk state.
- Verbalized confidence improves calibration, reduces overconfident mistakes, and enables a stronger Adaptive RAG controller that invokes retrieval more selectively (a sketch of such a controller follows this list).
- Reasoning-time uncertainty signaling makes silent failures visible during generation, flags a larger share of wrong answers, and can serve as an effective high-recall retrieval trigger (see the streaming sketch below).
- The authors find the two mechanisms differ internally: verbalized confidence mainly refines how uncertainty is decoded, while reasoning-time signaling drives a broader reorganization of late layers.
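The paper does not publish controller code here, but the selective-retrieval idea in the third key point can be made concrete with a minimal sketch. It assumes the model is prompted to end its answer with a line like `Confidence: 0.85`, and that `generate` and `retrieve` are caller-supplied stand-ins for an LLM call and a retriever; the threshold and output format are illustrative, not the paper's actual design:

```python
import re

# Assumed verbalized-confidence format: a trailing line "Confidence: 0.85".
CONF_PATTERN = re.compile(r"Confidence:\s*([01](?:\.\d+)?)")

def parse_confidence(answer: str) -> float:
    """Extract a verbalized confidence in [0, 1] from the model's answer.

    Falls back to 0.0 (maximally uncertain) when no score is found, which
    conservatively forces retrieval.
    """
    match = CONF_PATTERN.search(answer)
    return float(match.group(1)) if match else 0.0

def adaptive_rag_answer(question: str, generate, retrieve, threshold: float = 0.7) -> str:
    """Answer directly when verbalized confidence is high; otherwise retrieve and retry.

    `generate(prompt) -> str` and `retrieve(query) -> list[str]` are
    caller-supplied; any real controller would tune `threshold` on held-out data.
    """
    answer = generate(question)
    if parse_confidence(answer) >= threshold:
        return answer  # confident: skip retrieval entirely
    # Low confidence: augment the prompt with retrieved evidence and regenerate.
    context = "\n".join(retrieve(question))
    return generate(f"Context:\n{context}\n\nQuestion: {question}")
```

The design choice worth noting is that the confidence score does double duty: it is both a calibration signal for the end user and the gating variable that decides whether the (slower, costlier) retrieval path runs at all.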
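The fourth key point, using the in-reasoning marker as a high-recall trigger, can be sketched similarly. This assumes the marker appears literally as the string `<uncertain>` in the streamed output and that the caller supplies the token stream and the trigger action; none of these names come from the paper:

```python
from typing import Callable, Iterable, Iterator

MARKER = "<uncertain>"  # assumed literal form of the reasoning-time marker

def watch_for_uncertainty(tokens: Iterable[str],
                          on_trigger: Callable[[], None]) -> Iterator[str]:
    """Pass streamed tokens through while scanning for the uncertainty marker.

    The first time the marker appears in the accumulated text, `on_trigger()`
    fires (e.g. to launch retrieval or escalate to a human); the caller decides
    whether to keep streaming or interrupt generation.
    """
    buffer = ""
    triggered = False
    for token in tokens:
        # Keep only a short tail so the scan buffer stays bounded.
        buffer = (buffer + token)[-4 * len(MARKER):]
        if not triggered and MARKER in buffer:
            triggered = True
            on_trigger()
        yield token

# Usage with a hypothetical streaming API:
#   for tok in watch_for_uncertainty(model.stream(prompt),
#                                    on_trigger=lambda: print("[retrieval fired]")):
#       print(tok, end="")
```

Because the marker is emitted mid-generation rather than after the answer, the trigger can fire before a wrong answer is completed, which is what makes it suitable as a high-recall (if noisier) signal.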