Dynamic Meta-Layer Aggregation for Byzantine-Robust Federated Learning
arXiv cs.LG / March 18, 2026
Key Points
- FedAOT is a meta-learning-inspired adaptive aggregation framework that weights client updates by estimated reliability, defending federated learning against Byzantine adversaries such as multi-label flipping and untargeted poisoning.
- Unlike defenses that rely on fixed thresholds or attack-type assumptions, FedAOT adapts automatically to diverse datasets and previously unseen attack types.
- The method maintains computational efficiency while improving global model accuracy and resilience in FL settings relevant to healthcare, finance, and IoT.
- Experiments indicate FedAOT substantially boosts robustness across untargeted poisoning scenarios and outperforms prior approaches, offering a scalable solution for secure federated learning.
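The core idea in the first bullet, down-weighting unreliable client updates during aggregation, can be sketched as follows. This is an illustrative stand-in, not FedAOT's actual scoring rule (which the summary does not specify): here, reliability is proxied by each update's distance to the coordinate-wise median, so outlying (potentially Byzantine) updates receive near-zero weight.

```python
import numpy as np

def reliability_weighted_aggregate(updates, temperature=1.0):
    """Aggregate client updates with reliability-based weights.

    Hypothetical sketch: reliability is approximated by proximity to
    the coordinate-wise median of all updates, so outliers are
    suppressed. FedAOT's real adaptive weighting may differ.
    """
    updates = np.asarray(updates, dtype=float)   # shape: (n_clients, dim)
    median = np.median(updates, axis=0)          # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)
    # Softmax over negative distances: closer to the median => higher weight.
    scores = np.exp(-dists / temperature)
    weights = scores / scores.sum()
    return weights @ updates

# Toy example: 4 honest clients near the true update, 1 poisoned outlier.
rng = np.random.default_rng(0)
honest = [np.array([1.0, 1.0]) + 0.05 * rng.standard_normal(2) for _ in range(4)]
poisoned = [np.array([-10.0, 10.0])]
agg = reliability_weighted_aggregate(honest + poisoned)
```

Because the poisoned update lies far from the median, its softmax weight collapses toward zero and the aggregate stays close to the honest clients' consensus, whereas a plain mean would be dragged far off.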
Related Articles
How We Built ScholarNet AI: An AI-Powered Study Platform for Students
Dev.to
Extracting and Following Paths for Robust Relational Reasoning with Large Language Models
arXiv cs.CL
Predictive Photometric Uncertainty in Gaussian Splatting for Novel View Synthesis
arXiv cs.CV
LatentQA: Teaching LLMs to Decode Activations Into Natural Language
arXiv cs.CL
DualEdit: Mitigating Safety Fallback in LLM Backdoor Editing via Affirmation-Refusal Regulation
arXiv cs.CL