Decentralized Proximal Stochastic Gradient Langevin Dynamics
arXiv stat.ML / 5/4/2026
Key Points
- The paper introduces DE-PSGLD (Decentralized Proximal Stochastic Gradient Langevin Dynamics), a decentralized MCMC method for sampling from log-concave distributions subject to constraints over a convex domain.
- It enforces the convex constraint through a shared proximal regularization via the Moreau–Yosida envelope, allowing each agent to run an unconstrained update that remains consistent with the constrained target posterior (see the sketch after this list).
- The authors provide non-asymptotic convergence guarantees measured in 2-Wasserstein distance for each agent’s iterates and for the network-wide average.
- DE-PSGLD is shown to converge to a regularized Gibbs distribution, with the analysis quantifying the bias caused by the proximal approximation.
- Experiments on both synthetic and real datasets indicate that DE-PSGLD, which the authors present as the first decentralized sampler for constrained domains, achieves fast posterior concentration and strong predictive accuracy.
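
The paper's exact recursion and constants are not reproduced here, but a minimal sketch of the gossip-then-proximal-Langevin pattern the bullets describe might look as follows. The box constraint, the fully connected gossip matrix `W`, the step size, and the helper names (`prox_box`, `de_psgld_step`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prox_box(x, lo=-1.0, hi=1.0):
    # Proximal operator of the indicator of a (hypothetical) box constraint
    # [lo, hi]^d, i.e. Euclidean projection onto the box.
    return np.clip(x, lo, hi)

def de_psgld_step(X, W, grad_fns, step, lam, rng):
    """One sketched decentralized proximal SGLD iteration.

    X        : (n_agents, d) array of current iterates, one row per agent
    W        : (n_agents, n_agents) doubly stochastic gossip matrix
    grad_fns : per-agent stochastic gradient oracles for the local potentials
    step     : step size gamma
    lam      : Moreau-Yosida smoothing parameter lambda
    """
    mixed = W @ X  # consensus step: average with neighbors
    X_new = np.empty_like(X)
    d = X.shape[1]
    for i, grad in enumerate(grad_fns):
        g = grad(mixed[i])  # stochastic gradient of the local potential
        # Gradient of the Moreau-Yosida envelope of the constraint indicator:
        # (x - prox(x)) / lambda, which vanishes inside the feasible set.
        my_grad = (mixed[i] - prox_box(mixed[i])) / lam
        noise = np.sqrt(2.0 * step) * rng.standard_normal(d)
        X_new[i] = mixed[i] - step * (g + my_grad) + noise
    return X_new

# Toy usage: 4 agents jointly sampling a standard Gaussian restricted to a box.
rng = np.random.default_rng(0)
n_agents, d = 4, 2
X = rng.standard_normal((n_agents, d))
W = np.full((n_agents, n_agents), 1.0 / n_agents)  # fully connected gossip
grads = [lambda x: x] * n_agents                   # grad of U_i(x) = ||x||^2 / 2
for _ in range(2000):
    X = de_psgld_step(X, W, grads, step=1e-2, lam=1e-1, rng=rng)
```

Because the Moreau–Yosida gradient is Lipschitz, each agent's step stays smooth and unconstrained, which is what the bullets mean by updates consistent with the constrained target; roughly, shrinking lambda pulls the regularized Gibbs distribution toward the true constrained posterior at the cost of stiffer dynamics, matching the bias quantified in the analysis.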