Tighter Performance Theory of FedExProx

arXiv stat.ML · April 21, 2026


Key Points

  • The paper re-examines FedExProx, a distributed optimization method that uses extrapolation to improve the convergence of parallel proximal algorithms (a minimal sketch of the update appears after this list).
  • It uncovers a surprising flaw: the previously claimed guarantees for quadratic optimization are no better than those of standard Gradient Descent (GD).
  • The authors introduce a new analysis framework that proves a tighter linear convergence rate for non-strongly convex quadratic problems, and shows FedExProx can outperform GD when computation and communication costs are included.
  • The work further studies partial participation and proposes two adaptive extrapolation strategies, based on gradient diversity and Polyak stepsizes, that substantially improve over earlier results (both rules are sketched after the abstract).
  • Beyond quadratics, the analysis is extended to functions satisfying the Polyak-Łojasiewicz condition, with empirical evidence pointing to a stronger potential for extrapolation benefits in federated learning.
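
To make the method concrete, here is a minimal sketch of the FedExProx update on quadratic clients, where the proximal step has a closed form. The client data (`A`, `b`), the fixed extrapolation parameter `alpha`, and the function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma, alpha = 10, 5, 1.0, 1.5  # alpha > 1 means extrapolation (assumed values)

# Hypothetical quadratic clients f_i(x) = 0.5 * ||A_i x - b_i||^2; their prox
# has the closed form (I + gamma * A_i^T A_i)^{-1} (x + gamma * A_i^T b_i).
A = [rng.standard_normal((3, d)) for _ in range(n)]
b = [rng.standard_normal(3) for _ in range(n)]

def prox(i, x):
    """Proximal step on client i's local quadratic (run in parallel in practice)."""
    return np.linalg.solve(np.eye(d) + gamma * A[i].T @ A[i],
                           x + gamma * A[i].T @ b[i])

x = np.zeros(d)
for _ in range(200):
    avg = np.mean([prox(i, x) for i in range(n)], axis=0)  # server averages client prox steps
    x = x + alpha * (avg - x)                              # extrapolated server update
```

With alpha = 1 this reduces to plain parallel proximal averaging; the paper's tighter analysis concerns how large alpha can safely be taken, and when the resulting speedup provably beats GD once computation and communication costs are counted.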

Abstract

We revisit FedExProx - a recently proposed distributed optimization method designed to enhance convergence properties of parallel proximal algorithms via extrapolation. In the process, we uncover a surprising flaw: its known theoretical guarantees on quadratic optimization tasks are no better than those offered by the vanilla Gradient Descent (GD) method. Motivated by this observation, we develop a novel analysis framework, establishing a tighter linear convergence rate for non-strongly convex quadratic problems. By incorporating both computation and communication costs, we demonstrate that FedExProx can indeed provably outperform GD, in stark contrast to the original analysis. Furthermore, we consider partial participation scenarios and analyze two adaptive extrapolation strategies - based on gradient diversity and Polyak stepsizes - again significantly outperforming previous results. Moving beyond quadratics, we extend the applicability of our analysis to general functions satisfying the Polyak-Łojasiewicz condition, outperforming the previous strongly convex analysis while operating under weaker assumptions. Backed by empirical results, our findings point to a new and stronger potential of FedExProx, paving the way for further exploration of the benefits of extrapolation in federated learning.
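
The two adaptive extrapolation rules can also be sketched. Writing g_i = (x - prox_{γ f_i}(x)) / γ for the Moreau-envelope gradients, the FedExProx step moves along their average, and the extrapolation parameter can be chosen adaptively from quantities the clients already computed. The forms below are one plausible reading of the gradient-diversity and Polyak-stepsize ideas; the helper names are hypothetical, and the paper's exact formulas may include additional safeguards or normalizations.

```python
import numpy as np

def alpha_grads(x, proxes, gamma):
    """Gradient-diversity extrapolation (sketch): ratio of the mean squared
    Moreau-envelope gradient norm to the squared norm of the mean gradient.
    By Jensen's inequality the ratio is >= 1, so alpha never drops below 1."""
    g = [(x - p) / gamma for p in proxes]  # g_i = (x - prox_i(x)) / gamma
    g_bar = np.mean(g, axis=0)
    return np.mean([gi @ gi for gi in g]) / (g_bar @ g_bar)

def alpha_polyak(x, proxes, envelope_vals, envelope_min, gamma):
    """Polyak-stepsize extrapolation (sketch): pick alpha so the effective
    stepsize alpha * gamma matches the classical Polyak rule applied to the
    averaged Moreau envelope M(x): alpha * gamma = (M(x) - M*) / ||grad M(x)||^2."""
    g_bar = np.mean([(x - p) / gamma for p in proxes], axis=0)
    return (np.mean(envelope_vals) - envelope_min) / (gamma * (g_bar @ g_bar))
```

Either rule would replace the fixed alpha in the update loop sketched earlier; both rely only on the prox outputs (and, for the Polyak variant, the Moreau envelope values derived from them), which is what makes adaptive extrapolation inexpensive in the federated setting.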