ARMOR: Adaptive Resilience Against Model Poisoning Attacks in Continual Federated Learning for Mobile Indoor Localization
arXiv cs.LG / 3/23/2026
Key Points
- ARMOR proposes a continual federated learning framework for mobile indoor localization that monitors global model updates to defend against model poisoning.
- It introduces a state-space model that learns the historical evolution of the global model's weight tensors and predicts their next state for comparison with incoming updates.
- By detecting deviations, ARMOR selectively mitigates corrupted updates before aggregation, improving robustness to adversarial attacks and to changing indoor environments (a hedged sketch of this detect-and-mitigate step follows the list).
- Experimental results on real-world data show up to 8.0x reduction in mean localization error and about 5x reduction in worst-case error compared with state-of-the-art indoor localization frameworks.
- The work highlights a privacy-preserving, resilient FL approach suitable for resource-constrained mobile devices and heterogeneous deployments.
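The summary names the ingredients of ARMOR's defense (a state-space model over the global weights, prediction of the next state, deviation detection, selective mitigation before aggregation) but not the exact formulation. The sketch below fills the gaps with assumptions: a first-order linear transition fit per weight dimension, a z-score style deviation test with a hypothetical threshold `tau`, and plain FedAvg over the retained updates. None of these choices are confirmed by the paper; they only illustrate the general detect-and-mitigate loop.

```python
import numpy as np

# Illustrative sketch only: a first-order linear state-space model over the
# flattened global weights. The actual ARMOR model (state dimensions, learning
# rule, mitigation policy) is not described in this summary, so the names,
# threshold, and aggregation rule below are assumptions.

def fit_transition(history):
    """Fit w_{t+1} ~= a * w_t + b element-wise from past global weight vectors."""
    W = np.stack(history)               # shape: (T, d)
    X, Y = W[:-1], W[1:]
    x_mean, y_mean = X.mean(0), Y.mean(0)
    cov = ((X - x_mean) * (Y - y_mean)).mean(0)
    var = ((X - x_mean) ** 2).mean(0) + 1e-12
    a = cov / var                        # per-dimension slope
    b = y_mean - a * x_mean              # per-dimension intercept
    return a, b

def predict_next(history):
    """Predict the next global weight state from the fitted transition."""
    a, b = fit_transition(history)
    return a * history[-1] + b

def filter_updates(history, client_updates, tau=3.0):
    """Keep only client updates whose implied next global state stays close
    to the state-space prediction, then average the survivors (FedAvg)."""
    w_pred = predict_next(history)
    # Scale deviations by the typical round-to-round change in each dimension.
    resid_scale = np.std(np.stack(history[1:]) - np.stack(history[:-1]), axis=0) + 1e-12
    kept = []
    for delta in client_updates:
        w_candidate = history[-1] + delta
        deviation = np.abs(w_candidate - w_pred) / resid_scale
        if deviation.mean() <= tau:      # mitigate (drop) suspicious updates
            kept.append(delta)
    if not kept:                         # fall back to the predicted state
        return w_pred
    return history[-1] + np.mean(kept, axis=0)
```

Under these assumptions, a server would call `filter_updates` once per round with the recent history of global weight vectors and the deltas received from clients; the threshold, the per-tensor versus flattened treatment, and the fallback when every update looks suspicious are design choices the full paper would specify.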