A Framework for Variational Inference of Lightweight Bayesian Neural Networks with Heteroscedastic Uncertainties

arXiv stat.ML / 5/1/2026


Key Points

  • The paper addresses how Bayesian Neural Networks can produce heteroscedastic predictive uncertainties, which are crucial for many real-world applications.
  • Instead of predicting aleatoric uncertainty as extra network outputs (which increases the parameter count), it embeds both aleatoric and epistemic uncertainty into the variances of the BNN’s learned parameters, keeping models lightweight.
  • The authors combine this parameter-variance embedding with a moment propagation inference method to create a sampling-free variational inference framework.
  • The proposed approach aims to improve predictive performance for small BNNs while reducing the computational burden associated with sampling-based inference.

Abstract

Obtaining heteroscedastic predictive uncertainties from a Bayesian Neural Network (BNN) is vital to many applications. Often, heteroscedastic aleatoric uncertainties are learned as outputs of the BNN in addition to the predictive means; however, doing so may necessitate adding more learnable parameters to the network. In this work, we demonstrate that both the heteroscedastic aleatoric and epistemic variance can be embedded into the variances of learned BNN parameters, improving predictive performance for lightweight networks. By complementing this approach with moment propagation for inference, we introduce a relatively simple framework for sampling-free variational inference suitable for lightweight BNNs.
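To make the moment propagation idea concrete, the sketch below shows how the mean and variance of a Gaussian input distribution can be propagated analytically through a single Bayesian linear layer with Gaussian weight posteriors, so no Monte Carlo sampling is needed. This is a generic illustration of sampling-free moment propagation, not the authors' implementation; the function name and shapes are assumptions for the example.

```python
import numpy as np

def propagate_linear(mu_x, var_x, W_mean, W_var, b_mean, b_var):
    """Moment propagation through a Bayesian linear layer y = W x + b.

    Weights W ~ N(W_mean, W_var) elementwise, inputs x ~ N(mu_x, var_x),
    all assumed mutually independent. Returns the exact output mean and
    variance without drawing any samples.
    """
    # E[y] = E[W] E[x] + E[b]
    mu_y = W_mean @ mu_x + b_mean
    # Var[sum_j W_ij x_j] = sum_j [ Var(W_ij)(E[x_j]^2 + Var(x_j))
    #                              + E[W_ij]^2 Var(x_j) ]
    var_y = W_var @ (mu_x**2 + var_x) + (W_mean**2) @ var_x + b_var
    return mu_y, var_y

# Hypothetical usage: a 2-input, 1-output layer with deterministic inputs.
mu_x, var_x = np.array([1.0, 1.0]), np.zeros(2)
W_mean, W_var = np.array([[1.0, 2.0]]), np.full((1, 2), 0.1)
mu_y, var_y = propagate_linear(mu_x, var_x, W_mean, W_var, np.zeros(1), np.zeros(1))
# mu_y -> [3.0], var_y -> [0.2]
```

Because the weight variances appear directly in `var_y`, heteroscedastic uncertainty learned in the parameter variances surfaces in the predictive variance without extra output heads, which is the mechanism the paper exploits for lightweight networks.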
