PrivEraserVerify: Efficient, Private, and Verifiable Federated Unlearning

arXiv cs.LG / 4/15/2026


Key Points

  • PrivEraserVerify (PEV) proposes a unified federated unlearning framework that targets three needs simultaneously: efficiency, privacy, and verifiability for the right to be forgotten (RTBF).
  • The approach combines adaptive checkpointing for faster reconstruction, layer-adaptive differential privacy calibration to remove a departing client’s influence with less accuracy loss, and fingerprint-based verification to enable decentralized, noninvasive confirmation of unlearning.
  • Experiments across image, handwritten character, and medical datasets indicate unlearning can be 2–3× faster than full retraining while maintaining formal indistinguishability guarantees and reduced performance degradation.
  • The authors claim PEV is the first framework to jointly provide efficiency, privacy, and verifiability for federated unlearning, aiming to make federated learning more practical and regulation-compliant for deployment.
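To make the second bullet concrete, here is a minimal sketch of what layer-adaptive differentially private calibration could look like: the departing client's update is subtracted per layer, then Gaussian noise is added with a per-layer scale. The `sensitivities` dictionary, `base_sigma`, and the simple subtraction rule are illustrative assumptions, not the paper's exact calibration procedure.

```python
import numpy as np

def layer_adaptive_dp_calibration(global_weights, client_update,
                                  sensitivities, base_sigma=0.1, rng=None):
    """Sketch: remove a departing client's per-layer contribution, then
    perturb each layer with Gaussian noise scaled by an assumed per-layer
    sensitivity, so more influential layers get stronger protection."""
    rng = rng or np.random.default_rng(0)
    unlearned = {}
    for name, w in global_weights.items():
        # Subtract the departing client's (already weighted) contribution.
        calibrated = w - client_update[name]
        # Layer-adaptive noise scale: higher sensitivity -> more noise.
        sigma = base_sigma * sensitivities[name]
        unlearned[name] = calibrated + rng.normal(0.0, sigma, size=w.shape)
    return unlearned

# Toy model with two "layers" (hypothetical names and values).
weights = {"conv": np.ones((2, 2)), "fc": np.full((3,), 2.0)}
update  = {"conv": 0.1 * np.ones((2, 2)), "fc": 0.2 * np.ones(3)}
sens    = {"conv": 1.0, "fc": 0.5}
new_weights = layer_adaptive_dp_calibration(weights, update, sens)
```

The key idea this sketch captures is that the noise level is calibrated per layer rather than uniformly, which is how the authors claim to reduce the accuracy loss that a single global noise scale (as in FedRecovery) incurs.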

Abstract

Federated learning (FL) enables collaborative model training without sharing raw data, offering a promising path toward privacy-preserving artificial intelligence. However, FL models may still memorize sensitive information from participants, conflicting with the right to be forgotten (RTBF). To meet this requirement, federated unlearning has emerged as a mechanism to remove the contribution of departing clients. Existing solutions only partially address this challenge: FedEraser improves efficiency but lacks privacy protection, FedRecovery ensures differential privacy (DP) but degrades accuracy, and VeriFi enables verifiability but introduces overhead without efficiency or privacy guarantees. We present PrivEraserVerify (PEV), a unified framework that integrates efficiency, privacy, and verifiability into federated unlearning. PEV employs (i) adaptive checkpointing to retain critical historical updates for fast reconstruction, (ii) layer-adaptive differentially private calibration to selectively remove client influence while minimizing accuracy loss, and (iii) fingerprint-based verification, enabling participants to confirm unlearning in a decentralized and noninvasive manner. Experiments on image, handwritten character, and medical datasets show that PEV achieves 2–3× faster unlearning than retraining, provides formal indistinguishability guarantees with reduced performance degradation, and supports scalable verification. To the best of our knowledge, PEV is the first framework to simultaneously deliver efficiency, privacy, and verifiability for federated unlearning, moving FL closer to practical and regulation-compliant deployment.
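The fingerprint-based verification step (iii) can be sketched as a simple behavioral check: a client probes the model on marker inputs it knows the pre-unlearning model answers in a characteristic way, and after unlearning checks that this signature is gone. The random-probe construction, the `threshold`, and the toy models below are assumptions for illustration; the paper's actual fingerprint design may differ.

```python
import numpy as np

def make_markers(rng, n_markers=8, dim=4):
    """Client-side: random probe inputs whose predictions serve as a
    behavioral fingerprint (assumed construction, for illustration)."""
    return rng.normal(size=(n_markers, dim))

def fingerprint_match(model_fn, markers, reference_labels, threshold=0.5):
    """Verification: compare the model's answers on the markers to the
    recorded fingerprint. A match rate below `threshold` is taken as
    evidence that the client's influence was removed."""
    preds = np.array([model_fn(x) for x in markers])
    match_rate = float(np.mean(preds == reference_labels))
    return match_rate, match_rate < threshold

rng = np.random.default_rng(42)
markers = make_markers(rng)
reference = np.ones(len(markers), dtype=int)  # fingerprint labels the old model reproduced

# Toy stand-ins: the pre-unlearning model still emits the fingerprint,
# the post-unlearning model does not.
before_rate, before_ok = fingerprint_match(lambda x: 1, markers, reference)
after_rate, after_ok = fingerprint_match(lambda x: 0, markers, reference)
```

Because each client checks its own markers locally, this style of verification is decentralized and noninvasive: no party needs access to other clients' data or to the server's training history.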