Calibeating Prediction-Powered Inference

arXiv stat.ML / 4/24/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper tackles semisupervised mean estimation using a small labeled set, a large unlabeled set, and a black-box prediction model whose outputs may be miscalibrated.
  • It proposes Calibrated Prediction-Powered Inference (post-hoc score calibration on the labeled data) to improve both prediction quality and semisupervised inference without retraining the original model.
  • The authors analyze two calibration methods—linear and isotonic—and provide first-order optimality/efficiency results for isotonic calibration, including that additional post-processing of the fitted isotonic score yields no further first-order gains.
  • They clarify connections to existing estimators: the original PPI estimator is a special case of AIPW and can be inefficient even when the prediction model is accurate, while PPI++ is essentially AIPW with empirical efficiency maximization.
  • Experiments on simulated and real data show the calibrated estimators often outperform PPI and are competitive with or better than AIPW and PPI++; the authors also release an accompanying Python package, ppi_aipw.
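To make the calibrate-then-correct recipe above concrete, here is a minimal NumPy sketch of linear calibration followed by a PPI-style mean estimate. The simulation, variable names, and miscalibrated score are invented for illustration; this is not the paper's `ppi_aipw` implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a black-box score f that tracks Y but on the wrong scale/offset.
n, N = 200, 20_000                                    # small labeled, large unlabeled sample
y_lab = rng.normal(1.0, 1.0, n)                       # labeled outcomes
f_lab = 0.5 * y_lab + 2.0 + rng.normal(0, 0.3, n)     # miscalibrated score, labeled set
y_unl = rng.normal(1.0, 1.0, N)                       # unlabeled outcomes (unobserved in practice)
f_unl = 0.5 * y_unl + 2.0 + rng.normal(0, 0.3, N)     # miscalibrated score, unlabeled set

# Step 1: linear calibration on the labeled sample (OLS of Y on the score).
A = np.column_stack([np.ones(n), f_lab])
coef, *_ = np.linalg.lstsq(A, y_lab, rcond=None)

def g(f):
    """Calibrated score: intercept + slope * raw score."""
    return coef[0] + coef[1] * f

# Step 2: PPI-style mean estimate -- average the calibrated predictions on the
# unlabeled sample, then debias with the labeled-sample residuals.
theta_hat = g(f_unl).mean() + (y_lab - g(f_lab)).mean()
```

With an intercept in the OLS fit, the labeled residuals average exactly to zero, so the calibrated estimate reduces to the mean of the calibrated predictions on the unlabeled sample; for nonlinear calibrators the residual correction is the safeguard against miscalibration.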

Abstract

We study semisupervised mean estimation with a small labeled sample, a large unlabeled sample, and a black-box prediction model whose output may be miscalibrated. A standard approach in this setting is augmented inverse-probability weighting (AIPW) [Robins et al., 1994], which protects against prediction-model misspecification but can be inefficient when the prediction score is poorly aligned with the outcome scale. We introduce Calibrated Prediction-Powered Inference, which post-hoc calibrates the prediction score on the labeled sample before using it for semisupervised estimation. This simple step requires no retraining and can improve the original score both as a predictor of the outcome and as a regression adjustment for semisupervised inference. We study both linear and isotonic calibration. For isotonic calibration, we establish first-order optimality guarantees: isotonic post-processing can improve predictive accuracy and estimator efficiency relative to the original score and simpler post-processing rules, while no further post-processing of the fitted isotonic score yields additional first-order gains. For linear calibration, we show first-order equivalence to PPI++. We also clarify the relationship among existing estimators, showing that the original PPI estimator is a special case of AIPW and can be inefficient when the prediction model is accurate, while PPI++ is AIPW with empirical efficiency maximization [Rubin et al., 2008]. In simulations and real-data experiments, our calibrated estimators often outperform PPI and are competitive with, or outperform, AIPW and PPI++. We provide an accompanying Python package, ppi_aipw, at https://larsvanderlaan.github.io/ppi-aipw/.
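The isotonic calibration step studied in the abstract can be sketched with a small pool-adjacent-violators (PAVA) routine. This is an illustrative implementation under simplifying assumptions (squared-error PAVA, step-function extension to new scores); the function names `pava` and `isotonic_calibrate` are chosen here and are not the paper's `ppi_aipw` API.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: best nondecreasing fit to y in squared error."""
    blocks = []  # list of (block_mean, block_size)
    for v in y:
        mean, size = float(v), 1
        # Merge with preceding blocks while monotonicity is violated.
        while blocks and blocks[-1][0] > mean:
            m, s = blocks.pop()
            mean = (mean * size + m * s) / (size + s)
            size += s
        blocks.append((mean, size))
    fit = []
    for mean, size in blocks:
        fit.extend([mean] * size)
    return np.array(fit)

def isotonic_calibrate(f_lab, y_lab):
    """Fit a monotone map g so that g(f) approximates E[Y | f] on labeled data."""
    order = np.argsort(f_lab)
    x_sorted = np.asarray(f_lab)[order]
    g_sorted = pava(np.asarray(y_lab)[order])

    def g(f_new):
        # Extend the fitted step function to new scores via searchsorted.
        idx = np.clip(np.searchsorted(x_sorted, f_new), 0, len(x_sorted) - 1)
        return g_sorted[idx]

    return g
```

The calibrated score `g` can then replace the raw score in a PPI/AIPW-style estimator; the abstract's first-order optimality result says that, once this isotonic fit is applied, no further post-processing yields additional first-order gains.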