Unbiased Prevalence Estimation with Multicalibrated LLMs

arXiv cs.AI / 4/25/2026


Key Points

  • The paper addresses prevalence estimation (e.g., how common a category is in a population) when the measurement device (a diagnostic test, classifier, or LLM) has error rates that are known on a calibration sample but may not transfer to other populations.
  • It shows that the common assumption of stable error rates breaks under covariate shift, causing standard calibration/quantification approaches to become biased.
  • The authors prove that multicalibration—calibrating conditional on input feature segments rather than only on the overall average—can yield unbiased prevalence estimates under covariate shift.
  • Simulations and two real-world empirical studies (employment prevalence across U.S. states and multilingual political text classification) indicate that multicalibration substantially reduces bias, and they underscore that calibration data must cover the feature dimensions along which target populations differ.
  • Although the discussion often centers on LLMs, the theoretical guarantees apply broadly to any classification model, linking fairness-oriented calibration theory to a classic measurement problem across many fields.
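The "known error rates" correction the first bullet refers to can be illustrated with the classic Rogan-Gladen estimator, a standard quantification formula (our choice for illustration; the paper may discuss other quantification methods). It inverts the classifier's confusion behavior to recover true prevalence from the observed positive rate, and it is valid only if sensitivity and specificity measured on the calibration population still hold on the target population, which is exactly the stability assumption the paper shows breaks under covariate shift.

```python
def rogan_gladen(observed_positive_rate: float,
                 sensitivity: float,
                 specificity: float) -> float:
    """Correct a raw classifier positive rate for known error rates.

    observed = prevalence * sensitivity + (1 - prevalence) * (1 - specificity)
    Solving for prevalence gives the formula below. The correction is only
    unbiased if sensitivity/specificity estimated on the calibration
    population remain stable on the target population.
    """
    false_positive_rate = 1.0 - specificity
    return (observed_positive_rate - false_positive_rate) / (
        sensitivity + specificity - 1.0
    )

# Classifier flags 30% of the target sample as positive;
# calibration data gave sensitivity 0.90 and specificity 0.80.
estimate = rogan_gladen(0.30, 0.90, 0.80)
print(estimate)  # about 0.143, well below the raw 30% positive rate
```

If covariate shift changes the effective sensitivity or specificity on the target population, this corrected estimate is biased even though the formula itself is exact.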

Abstract

Estimating the prevalence of a category in a population using imperfect measurement devices (diagnostic tests, classifiers, or large language models) is fundamental to science, public health, and online trust and safety. Standard approaches correct for known device error rates but assume these rates remain stable across populations. We show this assumption fails under covariate shift and that multicalibration, which enforces calibration conditional on the input features rather than just on average, is sufficient for unbiased prevalence estimation under such shift. Standard calibration and quantification methods fail to provide this guarantee. Our work connects recent theoretical work on fairness to a longstanding measurement problem spanning nearly all academic disciplines. A simulation confirms that standard methods exhibit bias growing with shift magnitude, while a multicalibrated estimator maintains near-zero bias. While we focus the discussion mostly on LLMs, our theoretical results apply to any classification model. Two empirical applications -- estimating employment prevalence across U.S. states using the American Community Survey, and classifying political texts across four countries using an LLM -- demonstrate that multicalibration substantially reduces bias in practice, while highlighting that calibration data should cover the key feature dimensions along which target populations may differ.
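The abstract's core claim can be made concrete with a minimal simulation of our own construction (not the paper's code): a model that is calibrated only on average over the source population can be badly biased once group proportions shift, while a model calibrated conditional on the group feature (a toy stand-in for multicalibration over feature segments) stays unbiased, because averaging group-conditional probabilities is valid under any reweighting of the groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two subpopulations with different base rates of the positive class.
base_rate = {0: 0.2, 1: 0.8}

def sample(n: int, p_group1: float):
    """Draw group labels g and outcomes y for a population."""
    g = (rng.random(n) < p_group1).astype(int)
    y = (rng.random(n) < np.where(g == 1, base_rate[1], base_rate[0])).astype(int)
    return g, y

# Source population is 50/50, so average prevalence is 0.5. A model
# calibrated only on average may output 0.5 everywhere; a (multi)calibrated
# model outputs the correct base rate within each group.
g_tgt, y_tgt = sample(200_000, p_group1=0.8)  # covariate shift: now 20/80

avg_cal_estimate = np.full(len(g_tgt), 0.5).mean()            # stuck at 0.5
multi_cal_estimate = np.where(g_tgt == 1, 0.8, 0.2).mean()    # tracks shift
true_prevalence = y_tgt.mean()                                # ~0.68

print(avg_cal_estimate, multi_cal_estimate, true_prevalence)
```

The average-calibrated estimate stays at 0.5 regardless of the shift, while the group-conditional estimate lands near the true prevalence of about 0.68 (= 0.2 x 0.2 + 0.8 x 0.8). Real multicalibration enforces this conditional-calibration property across many overlapping feature segments rather than a single known group variable.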