Decentralized Proximal Stochastic Gradient Langevin Dynamics

arXiv stat.ML / 5/4/2026


Key Points

  • The paper introduces DE-PSGLD, a decentralized MCMC method to sample from log-concave distributions while handling constraints over a convex domain.
  • It enforces convex constraints using a shared proximal regularization via the Moreau–Yosida envelope, allowing agents to run unconstrained updates that remain consistent with the constrained target posterior.
  • The authors provide non-asymptotic convergence guarantees measured in 2-Wasserstein distance for each agent’s iterates and for the network-wide average.
  • DE-PSGLD is shown to converge to a regularized Gibbs distribution, with the analysis quantifying the bias caused by the proximal approximation.
  • Experiments on synthetic and real datasets indicate fast posterior concentration and strong predictive accuracy; the authors present this as the first decentralized sampling approach for constrained domains.
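The proximal mechanism in the second bullet can be made concrete. Writing $g$ for the convex indicator of the constraint set $\mathcal{C}$ and $\lambda > 0$ for a smoothing parameter, the Moreau–Yosida envelope and its gradient take the standard form (generic notation, not necessarily the paper's):

```latex
M_{\lambda} g(x) \;=\; \min_{y} \left\{ g(y) + \tfrac{1}{2\lambda}\,\|x - y\|^{2} \right\},
\qquad
\nabla M_{\lambda} g(x) \;=\; \frac{x - \operatorname{prox}_{\lambda g}(x)}{\lambda}.
```

For an indicator function the proximal map reduces to the Euclidean projection, $\operatorname{prox}_{\lambda g}(x) = \Pi_{\mathcal{C}}(x)$, so each agent can run an unconstrained Langevin update whose drift term continually pulls the iterate back toward $\mathcal{C}$; $\lambda$ controls the trade-off between smoothness and the bias quantified in the analysis.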

Abstract

We propose Decentralized Proximal Stochastic Gradient Langevin Dynamics (DE-PSGLD), a decentralized Markov chain Monte Carlo (MCMC) algorithm for sampling from a log-concave probability distribution constrained to a convex domain. Constraints are enforced through a shared proximal regularization based on the Moreau-Yosida envelope, enabling unconstrained updates while preserving consistency with the target constrained posterior. We establish non-asymptotic convergence guarantees in the 2-Wasserstein distance for both individual agent iterates and their network averages. Our analysis shows that DE-PSGLD converges to a regularized Gibbs distribution and quantifies the bias introduced by the proximal approximation. We evaluate DE-PSGLD for different sampling problems on synthetic and real datasets. As the first decentralized approach for constrained domains, our algorithm exhibits fast posterior concentration and high predictive accuracy.
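To make the update described above concrete, here is a minimal sketch of one DE-PSGLD-style iteration: each agent averages with its neighbors through a gossip matrix, then takes a stochastic-gradient Langevin step whose drift includes the Moreau–Yosida gradient of the constraint's indicator. The unit-ball constraint, gossip matrix `W`, step size `eta`, and smoothing parameter `lam` are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Projection onto the Euclidean ball; this is the proximal map of
    # the ball's indicator function (our assumed constraint set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def de_psgld_step(X, grads, W, eta, lam, rng):
    """One sketched DE-PSGLD iteration for n agents.

    X:     (n, d) array of current agent iterates
    grads: list of per-agent stochastic gradient functions of -log p_i
    W:     (n, n) doubly stochastic gossip matrix
    eta:   step size
    lam:   Moreau-Yosida smoothing parameter
    """
    n, d = X.shape
    mixed = W @ X  # gossip averaging with neighbors
    new_X = np.empty_like(X)
    for i in range(n):
        x = X[i]
        # Gradient of the Moreau-Yosida envelope of the indicator:
        # (x - prox(x)) / lam pulls iterates toward the feasible set.
        prox_grad = (x - project_ball(x)) / lam
        drift = grads[i](x) + prox_grad
        noise = np.sqrt(2 * eta) * rng.standard_normal(d)
        new_X[i] = mixed[i] - eta * drift + noise
    return new_X
```

For instance, sampling a standard Gaussian restricted to the unit ball would use `grads = [lambda x: x] * n` (the gradient of `||x||^2 / 2` for every agent) and iterate `de_psgld_step` until the per-agent chains mix; the regularized Gibbs target carries a bias that shrinks with `lam`, as the paper's analysis quantifies.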