FedSEA: Achieving Benefit of Parallelization in Federated Online Learning
arXiv cs.LG / 4/22/2026
Key Points
- The paper studies online federated learning (OFL), where standard adversarial assumptions often rule out any advantage from parallelization and fail to capture the statistical sources of variation in real data.
- It introduces a Stochastically Extended Adversary (SEA) model in which the loss function stays fixed across clients and over time, but the adversary may choose each client's data distribution independently and dynamically at every time step.
- The authors propose the 2OFL algorithm, which combines online stochastic gradient descent on the clients with periodic global aggregation at the server (a toy sketch of this loop follows the list).
- They prove bounds on the global network regret, including an O(√T) rate for smooth convex losses and an O(log T) rate for smooth strongly convex losses (one common way to write such a regret is given after the list).
- The analysis separates spatial (across-client) and temporal (over-time) heterogeneity and identifies a mild-temporal-variation regime in which parallelization genuinely improves regret, tightening prior pessimistic results.
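
For context on the regret bounds above, one common way to define network regret in online federated learning is the cumulative loss of the clients' iterates against the best single fixed model in hindsight, summed over clients and time. The form below is a hypothetical rendering under that convention; FedSEA's exact definition (normalization, placement of the expectation, choice of comparator) may differ.

```latex
% Hypothetical network regret for N clients over horizon T with one
% fixed comparator w; the paper's exact definition may differ.
\[
  \mathrm{Reg}_T
  = \mathbb{E}\!\left[\sum_{t=1}^{T}\sum_{i=1}^{N} f\bigl(w_{i,t};\, z_{i,t}\bigr)\right]
  - \min_{w}\; \mathbb{E}\!\left[\sum_{t=1}^{T}\sum_{i=1}^{N} f\bigl(w;\, z_{i,t}\bigr)\right],
  \qquad z_{i,t} \sim \mathcal{D}_{i,t},
\]
% with the rates summarized in the key points:
\[
  \mathrm{Reg}_T = O(\sqrt{T}) \ \text{(smooth convex)},
  \qquad
  \mathrm{Reg}_T = O(\log T) \ \text{(smooth strongly convex)}.
\]
```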
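
The algorithmic pattern in the third key point is local online SGD plus periodic server averaging. The toy Python loop below only illustrates that pattern; the quadratic loss, the sinusoidal per-client distribution drift, and the parameters `n_clients`, `sync_every`, and `eta` are all illustrative assumptions, not the paper's 2OFL specification.

```python
# Minimal sketch of federated online learning: each client runs online SGD
# on its own stream, and the server periodically averages the local models.
# Not the paper's 2OFL algorithm; all settings below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_clients = 8        # parallel clients (spatial dimension)
dim = 5              # model dimension
T = 2000             # time horizon
sync_every = 20      # rounds between global aggregations
eta = 0.05           # local online SGD step size

w_star = rng.normal(size=dim)        # fixed reference model shared by clients
models = np.zeros((n_clients, dim))  # one local model per client
regret = 0.0                         # cumulative loss gap vs. the reference w_star

for t in range(1, T + 1):
    for i in range(n_clients):
        # Each client's data distribution can drift independently over time
        # (here: a small client- and time-dependent mean shift, purely illustrative).
        shift = 0.1 * np.sin(0.01 * t + i)
        x = rng.normal(loc=shift, size=dim)
        y = x @ w_star + rng.normal(scale=0.1)

        # Squared-error loss f(w) = 0.5 * (x.w - y)^2, gradient (x.w - y) * x.
        pred = models[i] @ x
        grad = (pred - y) * x
        regret += 0.5 * (pred - y) ** 2 - 0.5 * (x @ w_star - y) ** 2

        # Local online (stochastic) gradient step.
        models[i] -= eta * grad

    # Periodic global aggregation: the server averages the local models
    # and broadcasts the average back to every client.
    if t % sync_every == 0:
        models[:] = models.mean(axis=0)

print(f"cumulative regret vs. w_star over T={T}: {regret:.2f}")
```

Intuitively, the periodic averaging step is where a parallelization benefit could show up: when per-client distributions vary only mildly over time, averaging across clients reduces the effective gradient noise, which matches the mild-temporal-variation regime highlighted in the key points.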