Textual Bayes: Quantifying Prompt Uncertainty in LLM-Based Systems

arXiv stat.ML / 4/21/2026


Key Points

  • The paper tackles the open problem of accurately quantifying uncertainty in LLM-based systems, especially in high-stakes settings where miscalibration can be costly.
  • It proposes a Bayesian framing of prompts by treating prompt text as textual parameters in a statistical model, enabling uncertainty quantification over both prompt parameters and downstream predictions.
  • It introduces an MCMC method called Metropolis-Hastings through LLM Proposals (MHLP) that combines prompt optimization ideas with standard Markov chain Monte Carlo to make Bayesian inference practical for prompts.
  • MHLP is presented as a “turnkey” modification that can work even with closed-source, black-box LLMs and improves both predictive accuracy and uncertainty quantification across multiple benchmarks.
  • More broadly, the work argues for integrating established Bayesian methods into the LLM era to build more reliable and better-calibrated LLM-based systems.
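The MHLP idea described above, Metropolis-Hastings where an LLM generates candidate prompts, can be sketched as a standard MH loop over text. Everything below is a minimal illustration, not the paper's implementation: `propose` stands in for a hypothetical LLM rewriting call, `log_posterior` for a score combining training-set log-likelihood with a text-based prior, and the accept rule assumes a symmetric proposal for simplicity.

```python
import math
import random

def mhlp_sample(init_prompt, propose, log_posterior, n_steps=100, seed=0):
    """Sketch of Metropolis-Hastings over textual prompts (MHLP-style).

    propose(prompt)       -> candidate prompt (would be an LLM call;
                             hypothetical interface, assumed symmetric here)
    log_posterior(prompt) -> unnormalized log posterior, e.g. training-set
                             log-likelihood plus a free-form-text log-prior
    Returns the chain of prompts visited.
    """
    rng = random.Random(seed)
    current = init_prompt
    current_lp = log_posterior(current)
    chain = []
    for _ in range(n_steps):
        candidate = propose(current)
        cand_lp = log_posterior(candidate)
        # Accept with probability min(1, exp(cand_lp - current_lp)).
        if math.log(rng.random()) < cand_lp - current_lp:
            current, current_lp = candidate, cand_lp
        chain.append(current)
    return chain
```

In the paper's setting the expensive parts are the two LLM-backed callables; the MH skeleton itself is unchanged from textbook MCMC, which is what makes the method a drop-in modification to existing pipelines.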

Abstract

Although large language models (LLMs) are becoming increasingly capable of solving challenging real-world tasks, accurately quantifying their uncertainty remains a critical open problem, one that limits their applicability in high-stakes domains. This challenge is further compounded by the closed-source, black-box nature of many state-of-the-art LLMs. Moreover, LLM-based systems can be highly sensitive to the prompts that bind them together, which often require significant manual tuning (i.e., prompt engineering). In this work, we address these challenges by viewing LLM-based systems through a Bayesian lens. We interpret prompts as textual parameters in a statistical model, allowing us to use a small training dataset to perform Bayesian inference over these prompts. This novel perspective enables principled uncertainty quantification over both the model's textual parameters and its downstream predictions, while also incorporating prior beliefs about these parameters expressed in free-form text. To perform Bayesian inference (a difficult problem even for well-studied data modalities), we introduce Metropolis-Hastings through LLM Proposals (MHLP), a novel Markov chain Monte Carlo (MCMC) algorithm that combines prompt optimization techniques with standard MCMC methods. MHLP is a turnkey modification to existing LLM pipelines, including those that rely exclusively on closed-source models. Empirically, we demonstrate that our method yields improvements in both predictive accuracy and uncertainty quantification (UQ) on a range of LLM benchmarks and UQ tasks. More broadly, our work demonstrates a viable path for incorporating methods from the rich Bayesian literature into the era of LLMs, paving the way for more reliable and calibrated LLM-based systems.
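Once prompt samples are available, the abstract's "uncertainty quantification over downstream predictions" amounts to a posterior-predictive average: run the system with each sampled prompt and aggregate the answers. The sketch below shows this for discrete answers; `answer_fn` is a hypothetical stand-in for the LLM call, not an interface from the paper.

```python
from collections import Counter

def posterior_predictive(prompts, answer_fn, x):
    """Approximate the posterior predictive distribution for input x.

    prompts          -- prompt samples, e.g. drawn from an MCMC chain
    answer_fn(p, x)  -- returns a discrete answer for input x under prompt p
                        (would be an LLM call; any callable works here)
    Returns a dict mapping each answer to its empirical probability.
    """
    counts = Counter(answer_fn(p, x) for p in prompts)
    total = sum(counts.values())
    return {answer: c / total for answer, c in counts.items()}
```

The spread of this distribution, rather than a single prompt's output, is what supports calibrated confidence estimates: disagreement across prompt samples signals uncertainty that a point estimate of the prompt would hide.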