Adaptive Stepsizing for Stochastic Gradient Langevin Dynamics in Bayesian Neural Networks

arXiv stat.ML / 4/10/2026


Key Points

  • The paper addresses a key limitation of existing SGMCMC methods for Bayesian neural networks: they are highly sensitive to the choice of stepsize, and common adaptive variants such as pSGLD can fail to sample the correct posterior invariant distribution without a costly divergence-correction term.
  • It proposes SA-SGLD, an adaptive Stochastic Gradient Langevin Dynamics method that uses time rescaling within the SamAdams framework to modulate effective stepsizes based on a monitored quantity such as the local gradient norm.
  • The authors argue that SA-SGLD improves stability and mixing by automatically shrinking stepsizes in high-curvature regions of the loss landscape and expanding them in flatter regions.
  • Experiments show more accurate posterior sampling than standard SGLD on high-curvature 2D toy problems and on image classification tasks with Bayesian neural networks using sharp priors.
  • Overall, the work aims to achieve adaptive sampling behavior without introducing bias into the target posterior distribution.

Abstract

Bayesian neural networks (BNNs) require scalable sampling algorithms to approximate posterior distributions over parameters. Existing stochastic gradient Markov chain Monte Carlo (SGMCMC) methods are highly sensitive to the choice of stepsize, and adaptive variants such as pSGLD typically fail to sample the correct invariant measure without the addition of a costly divergence correction term. In this work, we build on the recently proposed 'SamAdams' framework for timestep adaptation (Leimkuhler, Lohmann, and Whalley 2025), introducing an adaptive scheme: SA-SGLD, which employs time rescaling to modulate the stepsize according to a monitored quantity (typically the local gradient norm). SA-SGLD can automatically shrink stepsizes in regions of high curvature and expand them in flatter regions, improving both stability and mixing without introducing bias. We show that our method can achieve more accurate posterior sampling than SGLD on high-curvature 2D toy examples and in image classification with BNNs using sharp priors.
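To make the idea concrete, here is a minimal sketch of an SGLD loop whose stepsize is rescaled by the local gradient norm, on a toy 2D Gaussian target with one sharp direction. This is an illustration of the general mechanism only, not the paper's algorithm: the target density, the rescaling function `h / (1 + alpha * ||g||)`, and all parameter values are assumptions for the example, and a naive state-dependent rescaling like this one generally biases the sampled distribution, which is precisely the issue the SamAdams time-rescaling construction is designed to avoid.

```python
import numpy as np

def grad_log_post(theta):
    # Toy 2D log-posterior gradient: anisotropic Gaussian with one sharp
    # direction (precision 100 vs 1), standing in for a high-curvature
    # posterior. Purely illustrative.
    precision = np.array([100.0, 1.0])
    return -precision * theta

def adaptive_sgld_sketch(theta0, h=0.05, alpha=1.0, n_steps=5000, seed=0):
    """SGLD with a gradient-norm-based stepsize rescaling (sketch).

    The effective stepsize shrinks as 1 / (1 + alpha * ||grad||), so steps
    are small where the gradient (and typically the curvature) is large and
    close to h in flat regions. Unlike SA-SGLD's time rescaling, this naive
    rescaling does not preserve the target invariant measure exactly.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        g = grad_log_post(theta)                        # log-posterior gradient
        h_eff = h / (1.0 + alpha * np.linalg.norm(g))   # rescaled stepsize
        noise = rng.standard_normal(theta.shape)
        # Euler-Maruyama Langevin step with the rescaled stepsize.
        theta = theta + 0.5 * h_eff * g + np.sqrt(h_eff) * noise
        samples.append(theta.copy())
    return np.array(samples)

samples = adaptive_sgld_sketch([2.0, 2.0])
print(samples.shape)  # (5000, 2)
```

Note that with the base stepsize h = 0.05, a plain Euler-Maruyama step would be unstable along the sharp direction (curvature 100); the shrinking effective stepsize is what keeps the chain stable, which mirrors the stability benefit the paper claims for adaptive stepsizing.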