Dynamic Meta-Layer Aggregation for Byzantine-Robust Federated Learning
arXiv cs.LG · March 18, 2026
📰 News · Models & Research
Key Points
- FedAOT is a meta-learning-inspired adaptive aggregation framework that weights client updates by estimated reliability, defending federated learning against Byzantine adversaries such as multi-label flipping and untargeted poisoning.
- Unlike defenses that rely on fixed thresholds or assumptions about the attack type, FedAOT adapts automatically to diverse datasets and previously unseen attacks.
- The method maintains computational efficiency while improving global model accuracy and resilience in FL settings relevant to healthcare, finance, and IoT.
- Experiments indicate FedAOT substantially boosts robustness across untargeted poisoning scenarios and outperforms prior approaches, offering a scalable solution for secure federated learning.
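The summary does not spell out FedAOT's weighting rule. As a rough illustration of the general idea of reliability-weighted aggregation, here is a minimal sketch that uses distance to the coordinate-wise median as a stand-in reliability signal, with a softmax turning distances into weights; the heuristic, the `temperature` parameter, and the function name are illustrative assumptions, not the paper's method:

```python
import numpy as np

def reliability_weighted_aggregate(updates, temperature=1.0):
    """Aggregate client updates, down-weighting likely outliers.

    `updates` is a list of 1-D parameter-update vectors, one per client.
    The reliability signal here (distance to the coordinate-wise median)
    is a placeholder heuristic, not FedAOT's learned weighting.
    """
    U = np.stack(updates)                       # shape (n_clients, dim)
    median = np.median(U, axis=0)               # robust reference point
    dists = np.linalg.norm(U - median, axis=1)  # outlier-ness per client
    scores = -dists / temperature               # farther => lower score
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over clients
    return weights @ U                          # weighted average update
```

With three honest clients sending updates near `[1, 1]` and one Byzantine client sending `[100, -100]`, the Byzantine weight collapses toward zero and the aggregate stays near the honest mean, whereas a plain average would be dragged far off.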
Related Articles

Jeff Bezos reportedly wants $100 billion to buy and transform old manufacturing firms with AI
TechCrunch
[R] Weekly digest: arXiv AI security papers translated for practitioners -- Cascade (cross-stack CVE+Rowhammer attacks on compound AI), LAMLAD (dual-LLM adversarial ML, 97% evasion), OpenClaw (4 vuln classes in agent frameworks)
Reddit r/MachineLearning
My Experience with Qwen 3.5 35B
Reddit r/LocalLLaMA

Cursor’s new coding model Composer 2 is here: It beats Claude Opus 4.6 but still trails GPT-5.4
VentureBeat
Qwen 3.5 122B completely falls apart at ~ 100K context
Reddit r/LocalLLaMA