FedPBS: Proximal-Balanced Scaling Federated Learning Model for Robust Personalized Training for Non-IID Data

arXiv cs.LG · March 17, 2026

Key Points

  • FedPBS is a new federated learning algorithm that combines ideas from FedBS and FedProx to address statistical heterogeneity and uneven client participation in highly non-IID settings.
  • It dynamically adapts batch sizes to each client's resources to enable balanced and scalable participation, and applies a proximal correction to small-batch clients to stabilize local updates and reduce divergence from the global model (see the sketch after this list).
  • In experiments on CIFAR-10 and UCI-HAR under severe non-IID settings, FedPBS consistently outperforms state-of-the-art baselines such as FedBS, FedGA, MOON, and FedProx, with smooth loss curves indicating stable convergence.
  • The results suggest that FedPBS delivers reliable gains across diverse federated environments, highlighting its potential for robust personalized training in real-world deployments.
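
To make the batch-size/proximal coupling concrete, here is a minimal PyTorch sketch of what a FedPBS-style client update could look like. This is an illustration under stated assumptions, not the paper's implementation: the resource-proportional batch-sizing rule, the `batch_threshold` trigger for the proximal term, and the helper names (`pick_batch_size`, `resource_score`) are all invented here for clarity.

```python
# Sketch of a FedPBS-style client round, assuming a FedProx-style proximal
# term applied only to small-batch clients. All rules and names below are
# illustrative assumptions based on the summary, not the paper's spec.
import copy
import torch


def pick_batch_size(resource_score: float, lo: int = 8, hi: int = 128) -> int:
    """Assumed rule: scale the local batch size with the client's
    resource budget, where resource_score is normalized to [0, 1]."""
    return max(lo, min(hi, int(lo + resource_score * (hi - lo))))


def local_update(global_model, loader, *, mu=0.01, batch_size=8,
                 batch_threshold=32, lr=0.01, epochs=1):
    """One client round: plain local SGD, plus a FedProx-style penalty
    (mu/2) * ||w - w_global||^2 when the client runs with a small batch."""
    model = copy.deepcopy(global_model)
    global_params = [p.detach().clone() for p in global_model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    # Selective proximal correction: only small-batch clients get pulled
    # toward the global model (assumed trigger).
    use_prox = batch_size < batch_threshold

    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            if use_prox:
                prox = sum(((p - g) ** 2).sum()
                           for p, g in zip(model.parameters(), global_params))
                loss = loss + 0.5 * mu * prox
            loss.backward()
            opt.step()
    return model.state_dict()
```

In a full round, each client would build its loader with `pick_batch_size(resource_score)` and the server would aggregate the returned weights (e.g., FedAvg-style weighted averaging) into the next global model.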

Abstract

Federated learning (FL) enables a set of distributed clients to jointly train machine learning models while preserving their local data privacy, making it attractive for applications in healthcare, finance, mobility, and smart-city systems. However, FL faces several challenges, including statistical heterogeneity and uneven client participation, which can degrade convergence and model quality. In this work, we propose FedPBS, an FL algorithm that couples complementary ideas from FedBS and FedProx to address these challenges. FedPBS dynamically adapts batch sizes to client resources to support balanced and scalable participation, and selectively applies a proximal correction to small-batch clients to stabilize local updates and reduce divergence from the global model. Experiments on benchmark datasets such as CIFAR-10 and UCI-HAR under highly non-IID settings show that FedPBS consistently outperforms state-of-the-art methods, including FedBS, FedGA, MOON, and FedProx, with smooth loss curves indicating stable convergence. Together, the results point to robust performance gains under extreme data heterogeneity across diverse federated environments.
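
For reference, the proximal correction mentioned above takes the following form in FedProx (Li et al., 2020), which FedPBS builds on; the summary indicates FedPBS applies the same kind of term, but only to small-batch clients. The threshold notation B_k < B_min below is our assumed formalization of "small-batch", not the paper's.

```latex
% FedProx-style local objective; the selective trigger B_k < B_min is an
% assumed formalization of "small-batch clients", introduced here.
\[
  \min_{w}\; h_k(w;\, w^{t}) \;=\; F_k(w) \;+\; \frac{\mu}{2}\,\lVert w - w^{t} \rVert^{2}
  \qquad \text{(applied only when } B_k < B_{\min}\text{)}
\]
```

Here F_k is client k's local empirical loss, w^t is the current global model, and mu >= 0 controls how strongly small-batch local updates are pulled back toward w^t.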