No More Guessing: a Verifiable Gradient Inversion Attack in Federated Learning
arXiv cs.LG / 4/17/2026
Key Points
- The paper addresses gradient inversion attacks in federated learning, where attackers reconstruct training samples from aggregated client gradients but often lack a reliable way to verify success.
- It proposes a Verifiable Gradient Inversion Attack (VGIA) that uses a geometric/algebraic view of ReLU leakage and hyperplane boundaries to isolate cases where a region corresponds to exactly one record.
- VGIA includes a subspace-based verification test that provides an explicit certificate of correctness for reconstructed samples, avoiding reliance on subjective plausibility checks.
- Experiments on tabular benchmarks show VGIA exactly recovers target records in settings where prior state-of-the-art attacks fail or cannot assess their own fidelity, while requiring fewer, more effective hyperplane queries.
- By improving both verifiability and efficiency, the work strengthens the privacy risk assessment for federated learning on tabular data, which was previously thought less vulnerable.
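To make the notion of a verifiable reconstruction concrete, here is a minimal sketch, not the paper's actual VGIA procedure: for a single record `x` passing through a dense layer `Wx + b` followed by ReLU, the gradients satisfy `dL/dW_i = (dL/db_i) * x` for every neuron `i`, so the weight-gradient matrix is rank-1. A candidate reconstruction can then be certified by checking that every active row yields the same `x`, rather than relying on a subjective plausibility check. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def reconstruct_and_verify(grad_W, grad_b, tol=1e-8):
    """Recover x from one layer's gradients and certify consistency.

    grad_W: (n_neurons, n_features) gradient of the loss w.r.t. weights.
    grad_b: (n_neurons,) gradient of the loss w.r.t. biases.
    Returns (x_hat, verified): verified is True only when every active
    neuron's gradient row is proportional to the same input vector.
    """
    active = np.abs(grad_b) > tol          # neurons with nonzero bias gradient
    if not active.any():
        return None, False
    # One candidate reconstruction per active neuron: row_i / grad_b_i.
    candidates = grad_W[active] / grad_b[active, None]
    x_hat = candidates[0]
    # Certificate: all candidates agree, and inactive rows carry no gradient.
    consistent = np.allclose(candidates, x_hat, atol=1e-6)
    zeros_ok = np.allclose(grad_W[~active], 0.0, atol=1e-6)
    return x_hat, bool(consistent and zeros_ok)

# Usage: simulate the gradients a single record would produce.
rng = np.random.default_rng(0)
x = rng.normal(size=5)                     # the private record
delta = rng.normal(size=3)                 # upstream error signal
grad_W = np.outer(delta, x)                # rank-1 structure from one sample
grad_b = delta
x_hat, verified = reconstruct_and_verify(grad_W, grad_b)
assert verified and np.allclose(x_hat, x)
```

When gradients are averaged over multiple records, the rank-1 structure breaks and the certificate fails, which is why isolating regions containing exactly one record, as the paper's hyperplane analysis aims to do, is the crucial step before verification.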



