Probabilistic Federated Learning on Uncertain and Heterogeneous Data with Model Personalization
arXiv cs.LG / 3/20/2026
Key Points
- Meta-BayFL is a personalized probabilistic federated learning (FL) method that combines Bayesian neural networks (BNNs) with meta-learning to counter the training degradation caused by data uncertainty and non-IID heterogeneity.
- It uses BNN-based client models that propagate uncertainty across hidden layers, stabilizing training on small and noisy local datasets.
- It introduces meta-learning with adaptive learning rates, enabling personalized updates under non-IID data and improving local adaptation.
- It presents a unified probabilistic and personalized design that makes global model aggregation more robust, and it provides a theoretical convergence analysis with an upper bound on the global model's error over communication rounds.
- In experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, Meta-BayFL outperforms state-of-the-art methods (e.g., pFedMe, Ditto, FedFomo) by up to 7.42% in test accuracy; the paper also analyzes runtime, latency, and communication costs for edge/IoT deployment.
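To make the key ideas concrete, the sketch below simulates one strand of this design on a toy linear problem: each client holds a Gaussian posterior over its weights (mean plus log-std, sampled via the reparameterization trick), takes local gradient steps with a per-client learning rate scaled by its local loss, and the server averages the posterior means. This is a minimal illustration, not the paper's algorithm: the adaptive-rate rule `base_lr / (1 + loss)`, the omission of the KL regularizer, and all names here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_client_data(w_true, shift, n=64):
    # Non-IID clients: each client's inputs are shifted differently.
    x = rng.normal(shift, 1.0, (n, 2))
    y = x @ w_true + rng.normal(0, 0.1, (n, 1))
    return x, y

class BayesClient:
    """Toy Bayesian client: weights are factorized Gaussians."""
    def __init__(self, mu):
        self.mu = mu.copy()                        # posterior mean
        self.log_sigma = np.full(mu.shape, -3.0)   # posterior log-std

    def local_step(self, x, y, lr):
        # One-sample reparameterized gradient step on squared loss
        # (KL term of variational training omitted for brevity).
        eps = rng.standard_normal(self.mu.shape)
        w = self.mu + np.exp(self.log_sigma) * eps  # sampled weights
        grad = x.T @ (x @ w - y) / len(x)
        self.mu -= lr * grad

w_true = np.array([[1.0], [-2.0]])      # ground-truth weights (toy)
global_mu = np.zeros((2, 1))            # global model (posterior mean)
base_lr = 0.1
clients = [make_client_data(w_true, s) for s in (-1.0, 0.0, 1.0)]

for rnd in range(30):                   # communication rounds
    updated = []
    for x, y in clients:
        c = BayesClient(global_mu)
        loss = float(np.mean((x @ c.mu - y) ** 2))
        lr = base_lr / (1.0 + loss)     # crude per-client adaptive rate (assumption)
        for _ in range(5):              # local epochs
            c.local_step(x, y, lr)
        updated.append(c.mu)
    global_mu = np.mean(updated, axis=0)  # aggregate posterior means

final_err = float(np.linalg.norm(global_mu - w_true))
print(f"distance to true weights: {final_err:.3f}")
```

Even with heterogeneous client inputs, averaging the posterior means drives the global model toward the shared underlying weights, while the loss-scaled learning rate lets noisier clients take more cautious steps.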