Improving reproducibility by controlling random seed stability in machine learning based estimation via bagging

arXiv stat.ML / 4/21/2026


Key Points

  • The paper studies how variability across different random seeds in machine learning can destabilize downstream debiased machine learning estimators.
  • It defines “random seed stability” using a concentration condition and proves that subbagging can ensure stability for any bounded-outcome regression algorithm.
  • The authors propose a new debiased machine learning workflow called “adaptive cross-bagging,” designed to remove seed dependence from both nuisance estimation and sample splitting through a modified cross-fitting procedure.
  • Numerical experiments show the proposed method meets the desired stability target, while baseline alternatives either fail to achieve the same level of stability or require much higher computational costs.
  • Compared with standard practice, the approach introduces only a small additional computational overhead, whereas competing methods can incur large penalties.
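The stabilization idea behind the second point can be sketched in plain subbagging terms: average the predictions of a seed-dependent learner over many random subsamples, which damps the dependence on any single seed. This is a minimal illustration of generic subbagging, not the paper's implementation; `subbag_predict` and the toy `noisy_mean` learner are hypothetical names.

```python
# Hedged sketch (not the paper's code): subbagging a seed-dependent regressor.
import numpy as np

def subbag_predict(fit_predict, X, y, X_new, n_bags=200, subsample=0.5, seed=0):
    """Average predictions from `fit_predict` trained on random subsamples.

    `fit_predict(X_tr, y_tr, X_new, seed)` is any seed-dependent learner
    returning predictions for X_new; averaging over many subsample fits
    damps the dependence on any single random seed.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    m = max(1, int(subsample * n))
    preds = []
    for b in range(n_bags):
        idx = rng.choice(n, size=m, replace=False)  # subsample without replacement
        preds.append(fit_predict(X[idx], y[idx], X_new, seed=b))
    return np.mean(preds, axis=0)

# Toy seed-dependent learner: a mean estimator plus seed-driven noise,
# standing in for the seed variability of a real ML algorithm.
def noisy_mean(X_tr, y_tr, X_new, seed):
    noise = np.random.default_rng(seed).normal(0.0, 0.5)
    return np.full(len(X_new), y_tr.mean() + noise)

X = np.arange(100.0).reshape(-1, 1)
y = np.ones(100)
stable = subbag_predict(noisy_mean, X, y, X)  # noise averages out across bags
```

A single `noisy_mean` fit is off by a seed-dependent amount with standard deviation 0.5; the subbagged average concentrates near the true value 1.0, which is the concentration behavior the paper formalizes as random seed stability.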

Abstract

Predictions from machine learning algorithms can vary across random seeds, inducing instability in downstream debiased machine learning estimators. We formalize random seed stability via a concentration condition and prove that subbagging guarantees stability for any bounded-outcome regression algorithm. We introduce a new cross-fitting procedure, adaptive cross-bagging, which simultaneously eliminates seed dependence from both nuisance estimation and sample splitting in debiased machine learning. Numerical experiments confirm that the method achieves the targeted level of stability whereas alternatives do not. Our method incurs a small computational penalty relative to standard practice whereas alternative methods incur large penalties.
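For context, the baseline that adaptive cross-bagging modifies is standard K-fold cross-fitting, where each observation's nuisance estimate comes from a model trained on the other folds. The sketch below shows only that generic baseline (the paper's adaptive procedure is not reproduced here); `cross_fit` and `mean_learner` are hypothetical names, and the comment marks the seed dependence of the split that the paper's method is designed to remove.

```python
# Hedged sketch of plain K-fold cross-fitting for nuisance estimation.
import numpy as np

def cross_fit(fit_predict, X, y, n_folds=5, seed=0):
    """Out-of-fold nuisance predictions via K-fold cross-fitting."""
    rng = np.random.default_rng(seed)  # the sample split itself depends on a seed
    n = len(y)
    order = rng.permutation(n)
    folds = np.array_split(order, n_folds)
    preds = np.empty(n)
    for k in range(n_folds):
        test = folds[k]
        train = np.setdiff1d(order, test)
        # Each observation's nuisance estimate uses only the other folds.
        preds[test] = fit_predict(X[train], y[train], X[test])
    return preds

# Toy nuisance learner: predict the training-set mean outcome.
def mean_learner(X_tr, y_tr, X_te):
    return np.full(len(X_te), y_tr.mean())

X = np.arange(20.0).reshape(-1, 1)
y = np.linspace(0.0, 1.0, 20)
nuisance = cross_fit(mean_learner, X, y)
```

Because both the split (`rng.permutation`) and, in practice, the learner depend on seeds, repeated runs of this baseline yield different downstream estimates; the paper's adaptive cross-bagging targets exactly this two-sided seed dependence.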