Probabilistic Federated Learning on Uncertain and Heterogeneous Data with Model Personalization
arXiv cs.LG / 3/20/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- Meta-BayFL addresses training degradation in federated learning caused by data uncertainty and non-IID heterogeneity by proposing a personalized probabilistic FL method that combines Bayesian neural networks with meta-learning.
- It uses BNN-based client models that incorporate uncertainty across hidden layers to stabilize training on small and noisy local datasets.
- It introduces meta-learning with adaptive learning rates to enable personalized updates under non-IID data, improving local training.
- It presents a unified probabilistic and personalized design that enhances the robustness of global model aggregation, and provides a theoretical convergence analysis with an upper bound on the global model's error over communication rounds.
- In experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, Meta-BayFL outperforms state-of-the-art personalized FL methods (e.g., pFedMe, Ditto, FedFomo) by up to 7.42% in test accuracy; the paper also analyzes runtime, latency, and communication costs for edge/IoT deployment.
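The key points above combine three ingredients: Bayesian (sampled) client weights, a meta-learning-style inner adaptation with an adaptive learning rate, and server-side aggregation. The paper's exact formulation is not given here, so the following is only a minimal numpy sketch under simplifying assumptions: each client is a linear regressor with a Gaussian weight posterior (mean `mu`, log-variance `log_var`), personalization is a few inner gradient steps (MAML-style), the adaptive rate is a simple per-round decay, and aggregation is FedAvg-style averaging of the adapted weights. All function names and hyperparameters are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, log_var, rng):
    # Reparameterization trick: w = mu + sigma * eps, eps ~ N(0, I).
    # Sampling (rather than using a point estimate) is the "Bayesian" part.
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def local_meta_step(mu, log_var, X, y, inner_lr, n_steps=3, rng=rng):
    """One client's personalized update: sample Bayesian weights, then
    adapt them with a few inner gradient steps on local data (MAML-style)."""
    w = sample_weights(mu, log_var, rng)
    for _ in range(n_steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE loss
        w = w - inner_lr * grad
    return w

def federated_round(client_states, client_data, inner_lr):
    """Server step: collect each client's adapted weights and average them
    into a new global mean (a stand-in for the paper's aggregation rule)."""
    adapted = [local_meta_step(mu, lv, X, y, inner_lr)
               for (mu, lv), (X, y) in zip(client_states, client_data)]
    return np.mean(adapted, axis=0)

# --- toy non-IID setup: each client's target weight is shifted -------------
d = 2
true_w = np.array([1.0, -2.0])
log_var_init = np.full(d, -4.0)  # small initial posterior variance
data = []
for i in range(3):
    X = rng.standard_normal((32, d))
    shift = 0.3 * (i - 1)                    # non-IID: per-client label shift
    data.append((X, X @ (true_w + shift)))

global_mu = np.zeros(d)
for r in range(5):
    inner_lr = 0.1 / (1 + r)                 # decaying ("adaptive") inner rate
    states = [(global_mu.copy(), log_var_init) for _ in data]
    global_mu = federated_round(states, data, inner_lr)
```

After a few rounds the averaged global mean moves toward the true weight even though each client adapts toward its own shifted target; the sampling noise from the Bayesian weights is damped both by the inner steps and by averaging across clients.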