High-Probability Convergence in Decentralized Stochastic Optimization with Gradient Tracking

arXiv cs.LG / 5/4/2026


Key Points

  • The paper studies high-probability convergence guarantees for decentralized stochastic optimization, focusing on extending known mean-squared-error (MSE) advantages to high-probability (HP) settings.
  • It argues that existing HP results for decentralized methods largely rely on stringent assumptions (e.g., bounded data heterogeneity or strong convexity), while bias-correction–style approaches can work under weaker conditions for MSE.
  • The authors analyze a decentralized stochastic gradient descent method enhanced with gradient tracking (GT-DSGD), assuming noise satisfies a relaxed sub-Gaussian condition.
  • They prove order-optimal HP convergence rates for both non-convex objectives and Polyak–Łojasiewicz costs, matching the dependence expected from the MSE regime (up to confidence/log factors).
  • Experiments on real and synthetic data support the theory and show that GT-DSGD delivers superior practical performance while preserving the benefits of bias correction under HP guarantees.

Abstract

We study high-probability (HP) convergence guarantees in decentralized stochastic optimization, where multiple agents collaborate to jointly train a model over a network. Existing HP results in decentralized settings almost exclusively focus on the Decentralized Stochastic Gradient Descent (DSGD) algorithm, which requires strong assumptions, such as bounded data heterogeneity or strong convexity of each agent's cost. This is in contrast to the mean-squared error (MSE) results, where methods incorporating bias-correction techniques are known to converge under relaxed assumptions and to achieve better practical performance. In this paper we take a first step toward bridging this gap by studying HP convergence of DSGD incorporating the gradient tracking technique, in the presence of noise satisfying a relaxed sub-Gaussian condition. We show that the resulting method, dubbed GT-DSGD, achieves order-optimal HP convergence rates for both non-convex and Polyak–Łojasiewicz costs, of order O(log(1/δ)/√(nT)) and O(log(1/δ)/(nT)), respectively, where n is the number of agents, T is the time horizon, and δ ∈ (0,1) is the confidence parameter. Our results establish that GT-DSGD converges in the HP sense under the same conditions on the cost as in the MSE sense, while achieving comparable transient times. To the best of our knowledge, these are the first HP guarantees for decentralized optimization methods incorporating bias-correction. Numerical experiments on real and synthetic data verify our theoretical findings, underlining the superior performance of GT-DSGD and highlighting that the benefits of incorporating bias-correction are also maintained in the HP sense.
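To make the gradient tracking idea concrete, here is a minimal sketch of the standard GT-DSGD iteration on a toy problem. Everything below (the ring network, the least-squares costs, the step size, and the additive Gaussian noise standing in for the sub-Gaussian condition) is an illustrative assumption, not taken from the paper; the paper's analysis concerns the high-probability behavior of this update scheme, not this particular instance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decentralized least-squares: agent i minimizes f_i(x) = 0.5*||A_i x - b_i||^2.
# Sizes and network topology are illustrative choices, not from the paper.
n, d, m = 4, 5, 20
A = [rng.normal(size=(m, d)) for _ in range(n)]
x_star = rng.normal(size=d)            # common minimizer shared by all agents
b = [A[i] @ x_star for i in range(n)]

# Doubly stochastic mixing matrix W for a ring network.
W = np.zeros((n, n))
for i in range(n):
    for j in ((i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

def stoch_grad(i, x, noise=0.1):
    """Gradient of f_i at x plus additive Gaussian noise (a sub-Gaussian model)."""
    return A[i].T @ (A[i] @ x - b[i]) + noise * rng.normal(size=d)

alpha, T = 0.005, 2000
x = [rng.normal(size=d) for _ in range(n)]
g = [stoch_grad(i, x[i]) for i in range(n)]
y = list(g)  # gradient tracker, initialized with the first stochastic gradients

for _ in range(T):
    # Consensus step on the iterates, descent along the tracked direction.
    x_new = [sum(W[i, j] * x[j] for j in range(n)) - alpha * y[i] for i in range(n)]
    g_new = [stoch_grad(i, x_new[i]) for i in range(n)]
    # Tracker update: mix, then add the local gradient increment.
    # This is the bias-correction that removes the data-heterogeneity term.
    y = [sum(W[i, j] * y[j] for j in range(n)) + g_new[i] - g[i] for i in range(n)]
    x, g = x_new, g_new

x_bar = np.mean(x, axis=0)
err = np.linalg.norm(x_bar - x_star)
```

The tracker `y_i` estimates the network-average gradient, so each agent descends along a direction that does not require bounded heterogeneity across the local costs, which is why the convergence conditions match the MSE analysis rather than those needed for plain DSGD.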