On the Surprising Effectiveness of a Single Global Merging in Decentralized Learning

arXiv stat.ML / 4/28/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper analyzes how to schedule communication in decentralized learning, focusing on when and how often devices should synchronize to improve performance.
  • It reports counterintuitive empirical findings that allocating a larger share of the communication budget to later training stages significantly boosts global test accuracy.
  • Under high data heterogeneity, the authors find that using fully connected communication only at the final step, via a single global merging, can substantially improve decentralized learning outcomes (a toy sketch of this setup appears after this list).
  • Theoretical results show that the globally merged model from decentralized SGD can achieve the same convergence rate as parallel SGD, reframing part of the local-model discrepancy as a constructive element rather than harmful noise.
  • Overall, the work suggests decentralized learning can generalize well even with limited communication and strongly non-IID data, and it points to new directions for model merging research.
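To make the setup concrete, here is a minimal toy sketch, written for this summary rather than taken from the paper: several workers run local SGD on heterogeneous objectives, communicate sparsely over a ring during training, and are combined once by a single fully connected averaging step at the end. All names (`gossip_ring`, `local_optima`), the quadratic objectives, and the communication schedule are illustrative assumptions.

```python
import numpy as np

# Toy sketch (not the paper's code): n workers run local SGD on heterogeneous
# quadratic objectives, gossip sparsely over a ring, and are merged once by a
# single fully connected averaging step at the very end of training.
rng = np.random.default_rng(0)
n_workers, dim, steps, lr = 8, 10, 500, 0.05

# Each worker's optimum differs, mimicking non-IID (heterogeneous) local data.
local_optima = rng.normal(size=(n_workers, dim))
params = np.zeros((n_workers, dim))

def gossip_ring(p):
    """One gossip step on a ring: each worker averages with its two neighbours."""
    return (np.roll(p, 1, axis=0) + p + np.roll(p, -1, axis=0)) / 3.0

for t in range(steps):
    noise = 0.1 * rng.normal(size=params.shape)
    grads = (params - local_optima) + noise   # noisy gradient of 0.5 * ||x - opt||^2
    params -= lr * grads                      # local SGD update on each worker
    if t % 50 == 0:                           # infrequent peer-to-peer communication
        params = gossip_ring(params)

# Single global merging: one all-to-all (fully connected) average at the end.
merged = params.mean(axis=0)

# Distance of the merged model from the minimizer of the *average* objective.
print(np.linalg.norm(merged - local_optima.mean(axis=0)))
```

On this toy problem the final uniform average lands near the minimizer of the average objective even though each local model has drifted toward its own optimum, which is the intuition behind treating part of the local-model drift as constructive rather than as pure noise.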

Abstract

Decentralized learning provides a scalable alternative to parameter-server-based training, yet its performance is often hindered by limited peer-to-peer communication. In this paper, we study how communication should be scheduled over time, including determining when and how frequently devices synchronize. Counterintuitive empirical results show that concentrating communication budgets in the later stages of decentralized training remarkably improves global test performance. Surprisingly, we uncover that fully connected communication at the final step, implemented by a single global merging, can significantly improve the performance of decentralized learning under high data heterogeneity. Our theoretical contributions, which explain these phenomena, are the first to establish that the globally merged model of decentralized SGD can match the convergence rate of parallel SGD. Technically, we reinterpret part of the discrepancy among local models, previously regarded as detrimental noise, as a constructive component essential for matching this rate. This work provides evidence that decentralized learning can generalize under high data heterogeneity and limited communication, while offering broad new avenues for model merging research.
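For reference, the rate the abstract refers to is the standard guarantee for parallel SGD on smooth non-convex objectives. A loose, informal statement is given below; it is not taken from the paper, whose exact assumptions and constants may differ.

```latex
% Standard parallel SGD guarantee for smooth non-convex objectives, stated
% informally (step-size choices and the bounded-variance assumption omitted):
\[
  \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}\,\bigl\|\nabla f(\bar{x}_t)\bigr\|^2
  \;=\; \mathcal{O}\!\left(\frac{1}{\sqrt{nT}}\right),
\]
% where $n$ is the number of workers, $T$ the number of iterations, and
% $\bar{x}_t$ the (merged) average of the local models at step $t$.
```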