Asynchronous Federated Unlearning with Invariance Calibration for Medical Imaging

arXiv cs.LG / 4/30/2026

Key Points

  • The paper addresses limitations of existing Federated Unlearning (FU) methods, which typically require synchronous coordination and can cause long delays due to slow or heterogeneous client devices.
  • It proposes Asynchronous Federated Unlearning with Invariance Calibration (AFU-IC) for medical imaging, decoupling the unlearning/erasure process from the global federated training workflow.
  • AFU-IC allows a targeted client to perform unlearning asynchronously without halting the federation’s training rounds.
  • A server-side invariance calibration mechanism is introduced to help prevent the model from relearning information from the erased data in later training.
  • Experiments on three medical benchmarks show AFU-IC matches gold-standard retraining in unlearning effectiveness and model fidelity, while substantially reducing wall-clock latency versus synchronous baselines.
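The core idea in the points above — a target client erases its contribution in the background while the server keeps aggregating rounds, then the server folds the erasure in with a calibration step — can be sketched in a toy simulation. Everything here is a hypothetical illustration: the class names, the scalar "model," and the simple multiplicative calibration factor are stand-ins, not the paper's actual AFU-IC algorithm.

```python
import threading
import time

class Server:
    """Toy federated server with a scalar global model (illustration only)."""

    def __init__(self):
        self.global_model = 0.0
        self.rounds_completed = 0
        self.lock = threading.Lock()

    def training_round(self, client_updates):
        # FedAvg-style aggregation; never blocks on the unlearning client.
        with self.lock:
            self.global_model += sum(client_updates) / len(client_updates)
            self.rounds_completed += 1

    def apply_unlearning(self, erasure_delta, calibration=0.9):
        # Subtract the target client's contribution, then dampen the model
        # with a stand-in calibration factor -- a crude proxy for the
        # paper's server-side invariance calibration, which aims to keep
        # later rounds from re-absorbing the erased information.
        with self.lock:
            self.global_model = (self.global_model - erasure_delta) * calibration

def async_unlearn(server, erasure_delta):
    time.sleep(0.05)  # simulate a slow client performing local erasure
    server.apply_unlearning(erasure_delta)

server = Server()
# The unlearning client works in its own thread, decoupled from training.
t = threading.Thread(target=async_unlearn, args=(server, 0.5))
t.start()
# Global training continues uninterrupted -- no synchronous barrier.
for _ in range(5):
    server.training_round([0.1, 0.2, 0.3])
t.join()
print(server.rounds_completed)  # all 5 rounds ran without waiting
```

The contrast with a synchronous baseline is the point of the sketch: there, every `training_round` would sit behind a barrier until the erasing client finished, so the straggler's 0.05 s (in reality, hours on heterogeneous hospital hardware) would stall the whole federation.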

Abstract

Federated Unlearning (FU) is an emerging paradigm in Federated Learning (FL) that enables participating clients to fully remove their contributions from a trained global model, driven by data protection regulations that mandate the right to be forgotten. However, existing FU methods mostly rely on synchronous coordination. This requirement forces the entire federation to halt and wait for stragglers to complete erasure, creating significant delays due to device heterogeneity. Furthermore, these methods often face the problem that the influence of erased data is merely suppressed temporarily and resurfaces during subsequent training, rather than being genuinely removed. To overcome these limitations, this paper proposes Asynchronous Federated Unlearning with Invariance Calibration (AFU-IC), a novel framework for medical imaging that decouples the erasure process from the global training workflow. This enables the target client to perform unlearning asynchronously without interrupting global training. Meanwhile, a server-side invariance calibration mechanism prevents the model from relearning the erased data. Extensive experiments on three medical benchmarks demonstrate that AFU-IC achieves unlearning efficacy and model fidelity comparable to gold-standard retraining while significantly reducing wall-clock latency compared to synchronous baselines. AFU-IC ensures efficient, compliant and reliable FL in cross-silo medical environments.