RESIST: Resilient Decentralized Learning Using Consensus Gradient Descent

arXiv stat.ML / 4/7/2026


Key Points

  • The paper studies decentralized machine learning under communication constraints, highlighting man-in-the-middle (MITM) attacks that can arbitrarily alter messages and inject malicious updates during training.
  • It introduces RESIST, a multistep consensus gradient descent algorithm combined with robust statistics-based screening to suppress the effect of adversarially compromised links.
  • The authors claim RESIST provides stronger guarantees than prior approaches by achieving algorithmic and statistical convergence for strongly convex, Polyak–Łojasiewicz, and nonconvex empirical risk minimization (ERM) settings.
  • Experimental results are reported to show robustness and scalability across different attack strategies, screening methods, and loss functions, supporting the proposed defense’s practical viability.
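The combination described above, a consensus step over screened neighbor messages followed by a local gradient step, can be sketched as follows. This is an illustrative sketch only: the coordinate-wise median screen, the mixing weight `mix`, and the step size are stand-ins, not the paper's actual screening rules or parameters.

```python
import numpy as np

def screen_median(received):
    """Coordinate-wise median of neighbor messages: a simple
    robust-statistics screen that suppresses a minority of
    arbitrarily corrupted (e.g., MITM-altered) messages.
    Illustrative; the paper's screening methods may differ."""
    return np.median(np.stack(received), axis=0)

def consensus_gd_step(x_local, neighbor_msgs, grad, lr=0.1, mix=0.5):
    """One hedged sketch of a consensus gradient descent step:
    blend the local iterate with the screened aggregate of
    neighbor messages, then descend along the local gradient."""
    agg = screen_median(neighbor_msgs)
    x_mix = (1.0 - mix) * x_local + mix * agg
    return x_mix - lr * grad(x_mix)

# Toy usage: two honest neighbors near the true model and one
# adversarially altered message; the median screen ignores the outlier.
target = np.array([1.0, 1.0])
grad = lambda x: 2.0 * (x - target)          # gradient of ||x - target||^2
msgs = [np.array([1.0, 1.0]), np.array([1.1, 0.9]),
        np.array([100.0, -100.0])]           # last message is corrupted
x_new = consensus_gd_step(np.zeros(2), msgs, grad)
```

Because the median only needs a majority of honest messages per coordinate, the single corrupted link above has no influence on the aggregate, which is the intuition behind screening-based resilience.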

Abstract

Empirical risk minimization (ERM) is a cornerstone of modern machine learning (ML), supported by advances in optimization theory that ensure efficient solutions with provable algorithmic and statistical learning rates. Privacy, memory, computation, and communication constraints necessitate data collection, processing, and storage across network-connected devices. In many applications, networks operate in decentralized settings where a central server cannot be assumed, requiring decentralized ML algorithms that are efficient and resilient. Decentralized learning, however, faces significant challenges, including an increased attack surface. This paper focuses on the man-in-the-middle (MITM) attack, wherein adversaries exploit communication vulnerabilities to inject malicious updates during training, potentially causing models to deviate from their intended ERM solutions. To address this challenge, we propose RESIST (Resilient dEcentralized learning using conSensus gradIent deScenT), an optimization algorithm designed to be robust against adversarially compromised communication links, where transmitted information may be arbitrarily altered before being received. Unlike existing adversarially robust decentralized learning methods, which often (i) guarantee convergence only to a neighborhood of the solution, (ii) lack guarantees of linear convergence for strongly convex problems, or (iii) fail to ensure statistical consistency as sample sizes grow, RESIST overcomes all three limitations. It achieves algorithmic and statistical convergence for strongly convex, Polyak–Łojasiewicz, and nonconvex ERM problems by employing a multistep consensus gradient descent framework and robust statistics-based screening methods to mitigate the impact of MITM attacks. Experimental results demonstrate the robustness and scalability of RESIST across attack strategies, screening methods, and loss functions.
