No More Guessing: a Verifiable Gradient Inversion Attack in Federated Learning

arXiv cs.LG / April 17, 2026


Key Points

  • The paper addresses gradient inversion attacks in federated learning, where attackers reconstruct training samples from aggregated client gradients but often lack a reliable way to verify success.
  • It proposes a Verifiable Gradient Inversion Attack (VGIA) that uses a geometric/algebraic view of ReLU leakage and hyperplane boundaries to isolate cases where a region corresponds to exactly one record.
  • VGIA includes a subspace-based verification test that provides an explicit certificate of correctness for reconstructed samples, avoiding reliance on subjective plausibility checks.
  • Experiments on tabular benchmarks show that VGIA exactly recovers both the record and its target in settings where prior state-of-the-art attacks either fail or cannot assess reconstruction fidelity, while requiring fewer and more effective hyperplane queries.
  • By improving both verifiability and efficiency, the work sharpens the privacy risk assessment for federated learning on tabular data, which was previously thought to be less vulnerable.
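The ReLU leakage these points build on can be sketched numerically. For a fully connected layer with ReLU, the aggregated gradient row of any neuron that fires for exactly one record in the batch is a scaled copy of that record's feature vector, so an attacker who can certify such isolation recovers the record analytically. The toy one-layer model, sum loss, and dimensions below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 5, 64
X = rng.normal(size=(3, d))          # batch of 3 private records
W = rng.normal(size=(h, d))          # first fully connected layer
b = rng.normal(size=h)

# Forward: z = X W^T + b, a = ReLU(z); toy scalar loss L = sum(a).
Z = X @ W.T + b
act = (Z > 0).astype(float)          # dL/dz for L = sum(ReLU(z))

# Aggregated gradients the server observes (summed over the batch):
gW = act.T @ X                       # dL/dW, shape (h, d)
gb = act.sum(axis=0)                 # dL/db, shape (h,)

# If neuron i fires for exactly ONE record, its activation hyperplane
# isolates that record and gW[i] / gb[i] reproduces it exactly.
for i in range(h):
    if act[:, i].sum() == 1:
        k = int(act[:, i].argmax())
        rec = gW[i] / gb[i]
        assert np.allclose(rec, X[k])
        print(f"neuron {i}: exactly recovered record {k}")
        break
```

With a random layer and a small batch, some neuron almost surely isolates a single record; the paper's contribution is verifying such isolation rather than assuming it.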

Abstract

Gradient inversion attacks threaten client privacy in federated learning by reconstructing training samples from clients' shared gradients. Gradients aggregate contributions from multiple records, and existing attacks may fail to disentangle them, yielding incorrect reconstructions with no intrinsic way to certify success. In vision and language, attackers may fall back on human inspection to judge reconstruction plausibility, but this is far less feasible for numerical tabular records, fueling the impression that tabular data is less vulnerable. We challenge this perception by proposing a verifiable gradient inversion attack (VGIA) that provides an explicit certificate of correctness for reconstructed samples. Our method adopts a geometric view of ReLU leakage: the activation boundary of a fully connected layer defines a hyperplane in input space. VGIA introduces an algebraic, subspace-based verification test that detects when a hyperplane-delimited region contains exactly one record. Once isolation is certified, VGIA recovers the corresponding feature vector analytically and reconstructs the target via a lightweight optimization step. Experiments on tabular benchmarks with large batch sizes demonstrate exact record and target recovery in regimes where existing state-of-the-art attacks either fail or cannot assess reconstruction fidelity. Compared to prior geometric approaches, VGIA allocates hyperplane queries more effectively, yielding faster reconstructions with fewer attack rounds.
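The flavor of a subspace-based certificate can be illustrated loosely (the abstract does not spell out the paper's actual test) with a rank-one check: gradient rows from neurons whose hyperplanes isolate the same single record are all scalar multiples of that record's feature vector, so they span a one-dimensional subspace, whereas rows mixing contributions from two or more records do not. The function name, tolerance, and synthetic data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
x = rng.normal(size=d)               # the isolated record's feature vector

def rank_one_certificate(rows, tol=1e-8):
    """Pass iff all rows lie in a single 1-D subspace (numerical rank one)."""
    s = np.linalg.svd(np.atleast_2d(rows), compute_uv=False)
    return bool(s[0] > 0 and (s[1:] <= tol * s[0]).all())

# Rows from neurons that each isolate the same record: scalar multiples of x.
good = np.outer(rng.normal(size=4), x)
# Rows mixing contributions from a second record y span a 2-D subspace.
y = rng.normal(size=d)
bad = good + np.outer(rng.normal(size=4), y)

print(rank_one_certificate(good))    # True  -> isolation certified
print(rank_one_certificate(bad))     # False -> region holds >1 record
```

An algebraic pass/fail test of this kind is what replaces the subjective plausibility checks the abstract criticizes: the attacker gets an explicit certificate rather than a guess.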