Distributed Learning with Adversarial Gradient Perturbations

arXiv cs.LG / 5/6/2026

💬 Opinion · Models & Research

Key Points

  • The paper studies distributed learning when clients may return adversarially perturbed gradients, constrained only by a distance bound from the true gradient.
  • It analyzes convex and L-smooth optimization settings to determine the smallest achievable sub-optimality gap (excess error) despite such worst-case gradient manipulation.
  • It also characterizes how many server queries are necessary to guarantee a target sub-optimality gap under adversarial perturbations.
  • The authors provide tight feasibility thresholds for both optimization quality and query complexity, along with algorithms that provably meet those limits.
  • Overall, the work offers theoretical guidance on how robust distributed optimization can be when gradient privacy or tampering introduces bounded but arbitrary errors.
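To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of gradient descent on a convex, L-smooth function when every gradient reply may deviate from the truth by up to a distance bound `delta`. The one-dimensional objective `f(x) = 0.5 * x**2` and the specific adversary strategy are illustrative assumptions; the point is that even this simple worst-case perturbation creates a sub-optimality floor that no number of queries can push below.

```python
# Sketch: gradient descent with adversarially perturbed gradient replies.
# Assumptions (not from the paper): f(x) = 0.5 * x**2 (convex, L = 1), and an
# adversary that always spends its full budget |e| <= delta under-reporting
# the gradient so the iterates stall short of the optimum.

def f(x):
    return 0.5 * x * x

def adversarial_gradient(x, delta):
    true_grad = x  # gradient of 0.5 * x**2
    # Worst-case perturbation within the distance bound |e| <= delta:
    # shrink the reported gradient to stall progress toward the optimum.
    e = -delta if true_grad >= 0 else delta
    return true_grad + e

def perturbed_gd(x0, delta, steps, eta=1.0):
    x = x0
    for _ in range(steps):
        x -= eta * adversarial_gradient(x, delta)
    return x

x_final = perturbed_gd(x0=1.0, delta=0.1, steps=50)
print(x_final, f(x_final))  # iterates stall near |x| = delta, i.e. f near delta**2 / 2
```

With exact gradients this descent would drive `f` to zero; against the perturbed replies the iterate stalls at `|x| ≈ delta`, so the excess error plateaus around `delta**2 / 2`. This is the kind of unavoidable sub-optimality gap whose tight feasibility threshold the paper characterizes.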

Abstract

Privacy concerns in distributed learning often lead clients to return intentionally altered gradient information. We consider the problem of learning convex and L-smooth functions under adversarial gradient perturbation, where a client's gradient reply to a server query can deviate arbitrarily from the true gradient subject to a distance bound. Our study focuses on two fundamental questions: (i) what is the smallest achievable sub-optimality gap (i.e., excess error in optimization) under such responses, and (ii) how many queries are sufficient to guarantee a given sub-optimality gap? We establish tight feasibility thresholds on the sub-optimality gap and provide algorithms that achieve these thresholds with provable query complexity guarantees.