A Divergence-Based Method for Weighting and Averaging Model Predictions

arXiv stat.ML / April 28, 2026


Key Points

  • The paper proposes a divergence-based framework for computing model weights that are used to average probabilistic predictions across statistical and machine-learning models.
  • The weighting method is designed to be model-agnostic, working whether the component models are fitted using frequentist, Bayesian, or other approaches.
  • Experiments show the approach performs better than or comparably to common techniques such as stacking and Akaike-style negative exponentiated model weighting, with particular gains in small-sample regimes.
  • The authors provide a theoretical explanation for why the method tends to have an advantage when data samples are limited.

Abstract

This paper uses a minimum divergence framework to introduce a new way of calculating model weights that can be used to average probabilistic predictions from statistical and machine learning models. The method is general and can be applied regardless of whether the models under consideration are fit to data using frequentist, Bayesian, or some other fitting method. The proposed method is motivated in two different ways and is shown empirically to perform better than or on a par with standard model averaging methods, including model stacking and model averaging that relies on Akaike-style negative exponentiated model weighting, especially when the sample size is small. Our theoretical analysis explains why the method has a small-sample advantage.
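The paper's own divergence-based weighting is not detailed in this summary, but the Akaike-style baseline it is compared against has a standard form: each model's weight is proportional to the negative exponential of half its AIC gap to the best model, and the weighted predictions are then averaged. A minimal sketch of that baseline (function names and the two-model-averaging helper are illustrative, not from the paper):

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike-style weights: w_i proportional to exp(-delta_i / 2),
    where delta_i = AIC_i - min_j AIC_j."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()          # subtract the best (smallest) AIC for stability
    w = np.exp(-0.5 * delta)
    return w / w.sum()               # normalize so the weights sum to 1

def average_predictions(weights, preds):
    """Weighted average of per-model probabilistic predictions.

    preds: array of shape (n_models, n_points), e.g. predictive densities
    evaluated at common points; returns the model-averaged prediction."""
    return np.asarray(weights) @ np.asarray(preds, dtype=float)

# Example: the middle model has the lowest AIC, so it gets the largest weight.
w = akaike_weights([102.3, 100.0, 105.7])
averaged = average_predictions(w, [[0.1, 0.9], [0.2, 0.8], [0.3, 0.7]])
```

Stacking, the other baseline mentioned, instead chooses weights by optimizing out-of-sample predictive performance rather than an information criterion; the proposed method replaces both with a minimum-divergence criterion.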