Communication-Efficient Distributed Learning with Differential Privacy

arXiv cs.LG / 4/6/2026


Key Points

  • The paper studies how to perform nonconvex machine learning across undirected networks while balancing two constraints: minimizing communication and preserving privacy of agents’ data.
  • It proposes a local-training approach that reduces communication frequency, paired with differential-privacy-style protection by perturbing gradients using gradient clipping and additive noise.
  • The authors provide a convergence analysis, proving that the method converges to within a bounded distance of a stationary point of the distributed nonconvex objective.
  • They also derive differential privacy guarantees showing that agents’ training data cannot be inferred from the shared trained model under a defined privacy framework.
  • Experiments on a classification task indicate better accuracy than existing state-of-the-art privacy-preserving methods under the same privacy budget.
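The privacy mechanism described above, clipping each gradient to a fixed L2 norm and then adding Gaussian noise, can be sketched as follows. This is a minimal illustration of the general clip-and-perturb idea, not the paper's exact algorithm; the clipping threshold, noise scale, and function name are illustrative choices.

```python
import numpy as np

def private_gradient(grad, clip_norm=1.0, noise_std=0.5, rng=None):
    """Clip a gradient to L2 norm `clip_norm`, then add Gaussian noise.

    `clip_norm` and `noise_std` are illustrative values, not taken from
    the paper; in practice they are tuned to the target privacy budget.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping threshold.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Additive Gaussian noise masks any single sample's contribution.
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)
```

Clipping bounds each agent's per-step sensitivity, which is what makes the subsequent Gaussian noise yield a differential-privacy-style guarantee.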

Abstract

We address nonconvex learning problems over undirected networks. In particular, we focus on the challenge of designing an algorithm that is both communication-efficient and privacy-preserving with respect to the agents' data. The first goal is achieved through a local training approach, which reduces communication frequency. The second goal is achieved by perturbing gradients during local training, specifically through gradient clipping and additive noise. We prove that the resulting algorithm converges to within a bounded distance of a stationary point of the problem. Additionally, we provide theoretical privacy guarantees within a differential privacy framework, ensuring that agents' training data cannot be inferred from the trained model shared over the network. We show the algorithm's superior performance on a classification task under the same privacy budget, compared with state-of-the-art methods.
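The communication-efficient structure described in the abstract, several local gradient steps per agent followed by an occasional averaging exchange with neighbors on the undirected network, can be sketched as below. Everything here (function name, parameters, the uniform averaging rule) is an illustrative assumption, not the paper's specific update.

```python
import numpy as np

def decentralized_local_training(models, neighbors, grad_fn, lr=0.1,
                                 local_steps=5, rounds=10):
    """Sketch of local training with periodic neighbor averaging.

    models:    list of parameter vectors, one per agent
    neighbors: adjacency list of the undirected network
    grad_fn:   grad_fn(agent, params) -> local (stochastic) gradient
    All names and default values are illustrative assumptions.
    """
    for _ in range(rounds):
        # Local phase: each agent takes several gradient steps
        # without any communication.
        for agent, params in enumerate(models):
            for _ in range(local_steps):
                params -= lr * grad_fn(agent, params)
        # Communication phase: a single averaging step with neighbors
        # (uniform averaging here; the paper may use different weights).
        models = [np.mean([models[j] for j in [i] + neighbors[i]], axis=0)
                  for i in range(len(models))]
    return models
```

Communicating once every `local_steps` gradient updates, rather than after every step, is what reduces communication frequency; the privacy perturbation would be applied inside `grad_fn`.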