AI Navigate

ARES: Scalable and Practical Gradient Inversion Attack in Federated Learning through Activation Recovery

arXiv cs.LG / 3/19/2026


Key Points

  • ARES is a new active gradient inversion attack on federated learning that can reconstruct training samples even at large batch sizes, without modifying the model architecture.
  • The attack formulates the recovery as a noisy sparse recovery problem and uses generalized Lasso, incorporating an imprint-based method to disentangle activations for multi-sample reconstruction.
  • It provides theoretical guarantees (expected recovery rate and an upper bound on reconstruction error) and reports extensive experiments on CNNs and MLPs showing high-fidelity reconstructions under realistic FL settings.
  • The work highlights a serious privacy risk posed by intermediate activations in FL and argues for stronger defenses.
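To make the sparse-recovery framing concrete: the core mathematical step the key points describe is solving a noisy sparse recovery problem with a Lasso-style objective, min_x 0.5·‖Ax − y‖² + λ‖x‖₁. The sketch below is not the ARES attack itself (the paper's imprint-based activation disentangling is not reproduced here); it only illustrates the generic Lasso recovery step via ISTA (iterative soft-thresholding), with NumPy and all names (`ista_lasso`, problem sizes) chosen for illustration.

```python
import numpy as np

def ista_lasso(A, y, lam=0.1, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 via ISTA
    (proximal gradient descent with soft-thresholding)."""
    # Step size 1/L, with L the Lipschitz constant of the smooth part
    # (squared spectral norm of A).
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)        # gradient of the least-squares term
        z = x - lr * g               # gradient step
        # Soft-thresholding: proximal operator of lam*||.||_1
        x = np.sign(z) * np.maximum(np.abs(z) - lr * lam, 0.0)
    return x

# Toy instance: recover a k-sparse vector from n < d noisy measurements.
rng = np.random.default_rng(0)
n, d, k = 80, 200, 5
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[rng.choice(d, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)

x_hat = ista_lasso(A, y, lam=0.01)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

In the attack setting described above, the unknown `x` would correspond to per-sample activation information being recovered from shared gradients; here it is just a synthetic sparse vector, showing why sparsity makes recovery feasible even from underdetermined, noisy measurements.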

Abstract

Federated Learning (FL) enables collaborative model training by sharing model updates instead of raw data, aiming to protect user privacy. However, recent studies reveal that these shared updates can inadvertently leak sensitive training data through gradient inversion attacks (GIAs). Among them, active GIAs are particularly powerful, enabling high-fidelity reconstruction of individual samples even under large batch sizes. Nevertheless, existing approaches often require architectural modifications, which limit their practical applicability. In this work, we bridge this gap by introducing the Activation REcovery via Sparse inversion (ARES) attack, an active GIA designed to reconstruct training samples from large training batches without requiring architectural modifications. Specifically, we formulate the recovery problem as a noisy sparse recovery task and solve it using the generalized Least Absolute Shrinkage and Selection Operator (Lasso). To extend the attack to multi-sample recovery, ARES incorporates the imprint method to disentangle activations, enabling scalable per-sample reconstruction. We further establish the expected recovery rate and derive an upper bound on the reconstruction error, providing theoretical guarantees for the ARES attack. Extensive experiments on CNNs and MLPs demonstrate that ARES achieves high-fidelity reconstruction across diverse datasets, significantly outperforming prior GIAs under large batch sizes and realistic FL settings. Our results highlight that intermediate activations pose a serious and underestimated privacy risk in FL, underscoring the urgent need for stronger defenses.