Abstract
We study high-probability (HP) convergence guarantees in decentralized stochastic optimization, where multiple agents collaborate to jointly train a model over a network. Existing HP results in decentralized settings focus almost exclusively on the Decentralized Stochastic Gradient Descent ($\mathtt{DSGD}$) algorithm, which requires strong assumptions, such as bounded data heterogeneity or strong convexity of each agent's cost. This stands in contrast to mean-squared error (MSE) results, where methods incorporating bias-correction techniques are known to converge under relaxed assumptions and to achieve better practical performance. In this paper, we take a first step toward bridging this gap by studying the HP convergence of $\mathtt{DSGD}$ combined with the gradient tracking technique, in the presence of noise satisfying a relaxed sub-Gaussian condition. We show that the resulting method, dubbed $\mathtt{GT-DSGD}$, achieves order-optimal HP convergence rates for both non-convex and Polyak-\L{}ojasiewicz costs, of order $\mathcal{O}\Big(\frac{\log(1/\delta)}{\sqrt{nT}}\Big)$ and $\mathcal{O}\Big(\frac{\log(1/\delta)}{nT}\Big)$, respectively, where $n$ is the number of agents, $T$ is the time horizon, and $\delta \in (0,1)$ is the confidence parameter. Our results establish that $\mathtt{GT-DSGD}$ converges in the HP sense under the same conditions on the cost as in the MSE sense, while achieving comparable transient times. To the best of our knowledge, these are the first HP guarantees for decentralized optimization methods incorporating bias-correction. Numerical experiments on real and synthetic data verify our theoretical findings, underlining the superior performance of $\mathtt{GT-DSGD}$ and highlighting that the benefits of incorporating bias-correction are also maintained in the HP sense.
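To make the gradient tracking technique referenced above concrete, the following is a minimal sketch of the standard gradient-tracking iteration that $\mathtt{GT-DSGD}$ builds on: each agent mixes its model with its neighbors' via a doubly stochastic matrix $W$ and steps along an auxiliary variable $y_i$ that tracks the network-average gradient. The problem setup (scalar quadratic local costs, a ring graph, noiseless gradients) is purely illustrative and not taken from the paper; in the stochastic setting each `grad` call would return a noisy gradient sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) setup: n agents, each with local quadratic cost
# f_i(x) = 0.5 * (x - b_i)^2, so the global minimizer is mean(b).
n, T, alpha = 5, 2000, 0.05
b = rng.normal(size=n)

def grad(i, x):
    # Local gradient of f_i; noiseless here for illustration, whereas
    # GT-DSGD would use a stochastic gradient sample.
    return x - b[i]

# Doubly stochastic mixing matrix for a ring graph (uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = rng.normal(size=n)                              # local models x_i
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                        # trackers, y_i^0 = grad f_i(x_i^0)

for _ in range(T):
    x_new = W @ x - alpha * y                       # mix models, step along tracker
    g_new = np.array([grad(i, x_new[i]) for i in range(n)])
    y = W @ y + g_new - g                           # gradient-tracking correction
    x, g = x_new, g_new
```

After enough iterations all agents reach consensus near the global minimizer `b.mean()`, without any bounded-heterogeneity assumption on the local costs; this bias-correction property is what distinguishes gradient tracking from plain $\mathtt{DSGD}$.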