Revisiting Active Sequential Prediction-Powered Mean Estimation

arXiv stat.ML · April 21, 2026

📰 News · Models & Research

Key Points

  • The paper revisits active sequential prediction-powered mean estimation, where at each round the learner decides, based on the observed covariates, the probability of querying the true label; if the label is not queried, a model’s prediction is used instead.
  • It studies a previously proposed method that mixes an uncertainty-based query suggestion with a constant-probability term and finds empirically that the tightest confidence intervals occur when the constant component dominates.
  • The authors provide a new non-asymptotic theoretical analysis with a data-dependent bound for the estimator’s confidence interval.
  • They further show that when a no-regret learning approach is used to choose query probabilities, the query probability converges to the maximum allowed value whenever it is chosen obliviously to (i.e., independently of) the covariates.
  • Simulations are used to validate the theoretical results and the observed empirical patterns.
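The querying scheme in the bullets above can be sketched in a small simulation. This is illustrative only, not the paper's implementation: the uncertainty score, the mixing weight `alpha`, the constant `p_const`, and the data-generating process are all hypothetical, and the mean estimator is the standard inverse-propensity-corrected (prediction-powered) form.

```python
import random

def query_prob(x, alpha, p_const, uncertainty, p_min=0.05):
    """Mix a constant probability with an uncertainty-based suggestion.

    alpha -> 1 makes the constant component dominate, which is the
    regime the paper finds empirically gives the tightest intervals.
    """
    p = alpha * p_const + (1.0 - alpha) * uncertainty(x)
    return min(1.0, max(p_min, p))  # keep the probability in [p_min, 1]

def pp_mean_estimate(n, alpha, seed=0):
    """Active sequential prediction-powered mean estimate over n rounds."""
    rng = random.Random(seed)
    model = lambda x: x * x + 0.2         # deliberately biased model (hypothetical)
    uncertainty = lambda x: abs(x - 0.5)  # hypothetical uncertainty score
    total = 0.0
    for _ in range(n):
        x = rng.random()                  # covariate ~ Uniform(0, 1)
        y = x * x + rng.gauss(0.0, 0.1)   # ground-truth label, E[y] = 1/3
        p = query_prob(x, alpha, p_const=0.5, uncertainty=uncertainty)
        queried = rng.random() < p        # query the true label w.p. p
        # Prediction-powered term: model output plus an inverse-propensity
        # correction that keeps the estimator unbiased despite model bias.
        total += model(x) + (float(queried) / p) * (y - model(x))
    return total / n
```

With `alpha` close to 1 the query probability is essentially the constant `p_const`, matching the regime the paper highlights; the estimator remains unbiased for any valid mixing choice because the correction term has conditional mean `y - model(x)` regardless of how `p` is set.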

Abstract

In this work, we revisit the problem of active sequential prediction-powered mean estimation, where at each round one must decide the probability of querying the ground-truth label upon observing the covariates of a sample; if the label is not queried, the prediction of a machine learning model is used instead. Prior work proposed an elegant scheme that determines the query probability by combining an uncertainty-based suggestion with a constant probability that encodes a soft constraint on the query probability. Exploring different values of the mixing parameter, we observed an intriguing empirical pattern: the smallest confidence width tends to occur when the weight on the constant probability is close to one, thereby reducing the influence of the uncertainty-based component. Motivated by this observation, we develop a non-asymptotic analysis of the estimator and establish a data-dependent bound on its confidence interval. Our analysis further suggests that when a no-regret learning approach is used to choose the query probability and control this bound, the query probability converges to the maximum allowed value whenever it is chosen obliviously to the current covariates. We also present simulations that corroborate these theoretical findings.
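The estimator underlying this setup is presumably the standard prediction-powered (inverse-propensity-corrected) mean; as a sketch, with notation ours rather than necessarily the paper's, where $f$ is the model, $p_t$ the chosen query probability, and $\xi_t \mid X_t \sim \mathrm{Bernoulli}(p_t)$ indicates a query:

```latex
\hat{\mu} \;=\; \frac{1}{n}\sum_{t=1}^{n}
\left( f(X_t) + \frac{\xi_t}{p_t}\,\bigl(Y_t - f(X_t)\bigr) \right)
```

Since $\mathbb{E}[\xi_t/p_t \mid X_t] = 1$, each summand has conditional mean $\mathbb{E}[Y_t \mid X_t]$, so $\hat{\mu}$ is unbiased for any valid choice of the $p_t$; the variance of the correction term scales like $(1-p_t)/p_t$, which is why the confidence width depends on how the query probabilities are chosen.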