ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery
arXiv cs.LG / 3/19/2026
Key Points
- ARES is a new active gradient inversion attack on federated learning that reconstructs training samples at large batch sizes without modifying the model architecture.
- The attack formulates activation recovery as a noisy sparse recovery problem solved with a generalized Lasso, and uses an imprint-based method to disentangle activations for multi-sample reconstruction.
- It provides theoretical guarantees (an expected recovery rate and an upper bound on reconstruction error) and reports extensive experiments on CNNs and MLPs showing high-fidelity reconstructions under realistic FL settings.
- The work highlights the serious privacy risk posed by intermediate activations in FL and argues for stronger defenses.
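The core formulation in the second bullet — recovering activations as noisy sparse recovery, `y = A x + noise`, via a Lasso-type program — can be illustrated with a minimal sketch. This is not the paper's implementation; the measurement matrix, dimensions, and the use of plain ISTA for the standard (not generalized) Lasso are illustrative assumptions.

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=1000):
    """Iterative soft-thresholding (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A toy stand-in for the generalized-Lasso solver the paper describes."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)               # gradient of the smooth term
        z = x - grad / L                       # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

# Synthetic noisy sparse-recovery instance (dimensions are arbitrary).
rng = np.random.default_rng(0)
m, n, k = 80, 200, 5                           # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m) # noisy observations
x_hat = ista_lasso(A, y, lam=0.01)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

In the attack setting, `y` would be gradient observations and `x` the stacked per-sample activations, with the imprint-based disentangling determining which entries are active; the toy example only shows why a Lasso solver can separate a sparse signal from noisy linear mixtures.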