Amortized Variational Inference for Joint Posterior and Predictive Distributions in Bayesian Uncertainty Quantification

arXiv stat.ML / 5/6/2026


Key Points

  • The paper addresses the computational burden of the common two-stage Bayesian workflow that first approximates the parameter posterior and then propagates it to predictions via Monte Carlo sampling (the standard formulation is sketched after this list).
  • It proposes a variational Bayesian method that directly learns the posterior-predictive distribution by jointly training variational approximations for both the posterior over parameters and the predictive distribution.
  • The approach uses a variational upper bound on the KL divergence along with moment-based regularization terms to improve the quality of the learned distributions.
  • By training the variational distributions in an amortized fashion, the method shifts computation to an offline training stage and enables fast online predictive inference, which is particularly valuable for expensive high-fidelity models (e.g., PDE-based solvers).
  • Experiments on analytical benchmarks and a finite-element solid mechanics case show improved predictive accuracy over conventional two-stage variational inference with a substantial reduction in online inference cost.
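
For reference, the two-stage workflow mentioned in the first bullet targets the standard posterior-predictive integral and approximates it by pushing posterior samples through the predictive model. The display below is the generic textbook formulation, not notation taken from the paper:

```latex
% Posterior-predictive distribution and its two-stage Monte Carlo estimator:
% first fit q_phi(theta) ~ p(theta | D) by variational inference,
% then propagate S posterior samples through the predictive model.
p(y^\ast \mid x^\ast, \mathcal{D})
  = \int p(y^\ast \mid x^\ast, \theta)\, p(\theta \mid \mathcal{D})\, \mathrm{d}\theta
  \;\approx\; \frac{1}{S} \sum_{s=1}^{S} p\!\left(y^\ast \mid x^\ast, \theta^{(s)}\right),
  \qquad \theta^{(s)} \sim q_\phi(\theta) \approx p(\theta \mid \mathcal{D}).
```

Each sample θ^(s) requires a forward evaluation of the predictive model, which is the expensive step when that model involves a PDE solve; this repeated online cost is what the paper's amortized formulation is designed to avoid.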

Abstract

Bayesian predictive inference propagates parameter uncertainty to quantities of interest through the posterior-predictive distribution. In practice, this is typically performed using a two-stage procedure: first approximating the posterior distribution of model parameters, and then propagating posterior samples through the predictive model via Monte Carlo simulation. This sequential workflow can be computationally demanding, particularly for high-fidelity models such as those governed by partial differential equations. We propose a variational Bayesian framework that directly targets the posterior-predictive distribution and jointly learns variational approximations of both the posterior and the corresponding predictive distribution. The formulation introduces a variational upper bound on the Kullback–Leibler divergence together with moment-based regularization terms. The variational distributions are trained in an amortized manner, shifting computational effort to an offline stage and enabling efficient online inference. Numerical experiments ranging from analytical benchmarks to a finite-element solid mechanics problem demonstrate that the proposed method achieves more accurate predictive distributions than conventional two-stage variational inference, while substantially reducing the cost of online predictive inference.
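
To make the joint, amortized setup concrete, the sketch below trains Gaussian variational approximations for both the parameter posterior and the predictive distribution, conditioned on the observed data. Everything in it is a stand-in chosen for illustration: the toy forward model, the network architectures, and the loss (a standard ELBO plus a predictive fit term with a simple moment-matching penalty) are assumptions, not the paper's variational upper bound or its regularization terms.

```python
# Hedged sketch of amortized joint variational inference for the parameter
# posterior q_phi(theta | y_obs) and the predictive q_psi(y* | y_obs).
# Generic illustration only; NOT the paper's objective or architecture.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Assumed toy setup: theta in R^2, a cheap forward model standing in for an
# expensive solver, and one noisy observation.
def forward_model(theta):
    return torch.sin(theta[..., :1]) + 0.5 * theta[..., 1:]

true_theta = torch.tensor([0.8, -0.3])
noise_std = 0.1
y_obs = forward_model(true_theta) + noise_std * torch.randn(1)

class AmortizedGaussian(nn.Module):
    """Maps observed data to the mean / log-std of a diagonal Gaussian."""
    def __init__(self, obs_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 2 * out_dim))
        self.out_dim = out_dim
    def forward(self, y):
        h = self.net(y)
        return h[..., :self.out_dim], h[..., self.out_dim:]

q_post = AmortizedGaussian(obs_dim=1, out_dim=2)   # q_phi(theta | y_obs)
q_pred = AmortizedGaussian(obs_dim=1, out_dim=1)   # q_psi(y*   | y_obs)
opt = torch.optim.Adam(list(q_post.parameters()) + list(q_pred.parameters()), lr=1e-3)
prior = torch.distributions.Normal(torch.zeros(2), torch.ones(2))

# Offline (amortized) training: both approximations are learned jointly.
# Amortization pays off when trained across many observations; a single
# y_obs is used here only to keep the sketch short.
for step in range(2000):
    mu_t, ls_t = q_post(y_obs)
    q_theta = torch.distributions.Normal(mu_t, ls_t.exp())
    theta = q_theta.rsample((64,))                      # reparameterized samples

    # Standard ELBO for the posterior approximation (not the paper's bound).
    log_lik = torch.distributions.Normal(forward_model(theta), noise_std).log_prob(y_obs).sum(-1)
    elbo = (log_lik + prior.log_prob(theta).sum(-1) - q_theta.log_prob(theta).sum(-1)).mean()

    # Fit the predictive approximation to propagated samples, with a simple
    # moment-matching penalty standing in for moment-based regularization.
    y_star = forward_model(theta).detach()
    mu_y, ls_y = q_pred(y_obs)
    q_y = torch.distributions.Normal(mu_y, ls_y.exp())
    pred_nll = -q_y.log_prob(y_star).mean()
    moment_pen = (mu_y - y_star.mean(0)).pow(2).sum() + (ls_y.exp() - y_star.std(0)).pow(2).sum()

    loss = -elbo + pred_nll + 0.1 * moment_pen
    opt.zero_grad(); loss.backward(); opt.step()

# Online stage: predictive inference is a single forward pass, with no
# sampling loop and no forward-model evaluations.
with torch.no_grad():
    mu_y, ls_y = q_pred(y_obs)
print("predictive mean / std:", mu_y.item(), ls_y.exp().item())
```

The design choice the sketch highlights is the cost split: all forward-model evaluations happen inside the offline training loop, so the online step reduces to evaluating q_psi, which is the source of the speedups reported for the two-stage baseline comparison.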