AI Navigate

Computation and Communication Efficient Federated Unlearning via On-server Gradient Conflict Mitigation and Expression

arXiv cs.LG / 3/17/2026


Key Points

  • The paper proposes Federated On-server Unlearning (FOUL), a two-stage framework consisting of a learning-to-unlearn phase and an on-server knowledge aggregation phase to remove a forget client's data without accessing client data.
  • It introduces a new data setting for Federated Unlearning and a time-to-forget metric to quantify how quickly unlearning achieves optimal performance.
  • Experimental results on three datasets show FOUL matches or exceeds retraining's unlearning effectiveness while significantly reducing time-to-forget, communication, and computation costs.
  • By enabling privacy-preserving and efficient unlearning, FOUL aims to address cross-client knowledge leakage and regulatory requirements in federated learning.
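The paper's title refers to on-server gradient conflict mitigation, but the summary above does not spell out the mechanism. One common way to mitigate conflicting gradients (PCGrad-style projection, shown here purely as an illustrative sketch, not as FOUL's actual algorithm) is to project the retain-clients' gradient away from the direction it conflicts with:

```python
import numpy as np

def mitigate_conflict(g_retain: np.ndarray, g_conflict: np.ndarray) -> np.ndarray:
    """Illustrative PCGrad-style projection (not FOUL's actual method).

    If g_retain conflicts with g_conflict (negative dot product),
    remove the conflicting component of g_retain by projecting it
    onto the normal plane of g_conflict.
    """
    dot = float(np.dot(g_retain, g_conflict))
    if dot < 0:  # gradients point in conflicting directions
        g_retain = g_retain - (dot / float(np.dot(g_conflict, g_conflict))) * g_conflict
    return g_retain
```

After projection, the adjusted gradient is orthogonal to the conflicting direction, so a server-side update no longer undoes progress along it.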

Abstract

Federated Unlearning (FUL) aims to remove specific participants' data contributions from a trained Federated Learning model, thereby ensuring data privacy and compliance with regulatory requirements. Despite its potential, progress in FUL has been limited by several challenges, including cross-client knowledge inaccessibility and high computational and communication costs. To overcome these challenges, we propose Federated On-server Unlearning (FOUL), a novel framework comprising two key stages. The learning-to-unlearn stage serves as a preparatory learning phase, during which the model identifies and encodes the key features associated with the forget clients. This stage is communication-efficient and establishes the basis for the subsequent unlearning process. The on-server knowledge aggregation stage then performs the unlearning at the server without requiring access to client data, thereby preserving both efficiency and privacy. We introduce a new data setting for FUL that enables a more transparent and rigorous evaluation of unlearning. To highlight the effectiveness of our approach, we propose a novel evaluation metric, time-to-forget, which measures how quickly the model reaches optimal unlearning performance. Extensive experiments on three datasets under various unlearning scenarios demonstrate that FOUL outperforms retraining in FUL. Moreover, FOUL achieves competitive or superior results with significantly reduced time-to-forget, while maintaining low communication and computation costs.
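The abstract describes time-to-forget only informally, as how quickly the model reaches optimal unlearning performance. A minimal sketch of how such a metric might be computed, assuming a per-round unlearning score is available (the function name, score list, and tolerance are hypothetical; the authors' exact definition may differ):

```python
def time_to_forget(scores, tolerance=0.01):
    """Hypothetical formalization of a time-to-forget metric.

    Given per-round unlearning scores (higher = better unlearning),
    return the index of the first round whose score comes within
    `tolerance` of the best score observed over all rounds.
    """
    best = max(scores)
    for round_idx, score in enumerate(scores):
        if score >= best - tolerance:
            return round_idx
    return len(scores) - 1  # unreachable: the best round always qualifies
```

Under this reading, a lower time-to-forget means the method reaches near-optimal unlearning in fewer rounds, which is the efficiency axis the paper compares against retraining.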