Machine Unlearning for Class Removal through SISA-based Deep Neural Network Architectures

arXiv cs.CV / 5/1/2026

📰 News · Models & Research

Key Points

  • The paper addresses privacy and consent concerns by studying machine unlearning, specifically removing particular classes of data from trained CNN models without full retraining.
  • It proposes a modified SISA (Sharded, Isolated, Sliced, and Aggregated) framework tailored for class-level unlearning in convolutional neural networks; a baseline sketch of the shard-and-retrain idea follows this list.
  • The approach adds a reinforced replay mechanism and a gating network to improve the efficiency of selective forgetting.
  • Experiments across multiple image datasets and CNN configurations show effective class unlearning while maintaining model performance and reducing computational overhead relative to full retraining from scratch.
  • The authors provide an open-source implementation to support deployment and further research in privacy-sensitive AI systems.
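
To make the shard-and-retrain idea concrete, here is a minimal baseline sketch of SISA-style class removal: the training set is partitioned into isolated shards, one constituent model is trained per shard, and an unlearning request retrains only the shards that contain the target class. This sketch omits the per-shard slicing/checkpointing of full SISA as well as the paper's replay and gating additions; `shard_dataset`, `SISAEnsemble`, and the injected `train_fn` are hypothetical names, not the authors' API.

```python
# Baseline SISA-style class unlearning sketch (illustrative, not the paper's
# modified framework). A "dataset" here is a list of (x, label) pairs and
# train_fn is any function that fits a model on one shard and returns a
# callable mapping an input x to a predicted label.
import random
from collections import Counter, defaultdict

def shard_dataset(dataset, num_shards, seed=0):
    """Randomly partition (x, y) pairs into isolated shards."""
    rng = random.Random(seed)
    shards = defaultdict(list)
    for sample in dataset:
        shards[rng.randrange(num_shards)].append(sample)
    return [shards[i] for i in range(num_shards)]

class SISAEnsemble:
    def __init__(self, train_fn, num_shards):
        self.train_fn = train_fn      # trains one constituent model per shard
        self.num_shards = num_shards
        self.shards = []
        self.models = []

    def fit(self, dataset):
        self.shards = shard_dataset(dataset, self.num_shards)
        self.models = [self.train_fn(s) for s in self.shards]

    def unlearn_class(self, target_class):
        """Drop a class: filter its samples, retrain only the affected shards."""
        for i, shard in enumerate(self.shards):
            if any(y == target_class for _, y in shard):
                self.shards[i] = [(x, y) for x, y in shard if y != target_class]
                self.models[i] = self.train_fn(self.shards[i])

    def predict(self, x):
        """Aggregate constituent predictions by majority vote."""
        votes = Counter(m(x) for m in self.models)
        return votes.most_common(1)[0][0]
```

Because shards are trained in isolation, the cost of honoring a removal request scales with the number of affected shards rather than with the full dataset, which is the core efficiency argument behind SISA-based unlearning.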

Abstract

The rapid proliferation of image generation models and other artificial intelligence (AI) systems has intensified concerns regarding data privacy and user consent. As the availability of public datasets declines, major technology companies increasingly rely on proprietary or private user data for model training, raising ethical and legal challenges when users request the deletion of their data after it has influenced a trained model. Machine unlearning seeks to address this issue by enabling the removal of specific data from models without complete retraining. This study investigates a modified SISA (Sharded, Isolated, Sliced, and Aggregated) framework designed to achieve class-level unlearning in Convolutional Neural Network (CNN) architectures. The proposed framework incorporates a reinforced replay mechanism and a gating network to enhance selective forgetting efficiency. Experimental evaluations across multiple image datasets and CNN configurations demonstrate that the modified SISA approach enables effective class unlearning while preserving model performance and reducing retraining overhead. The findings highlight the potential of SISA-based unlearning for deployment in privacy-sensitive AI applications. The implementation is publicly available at https://github.com/SiamFS/sisa-class-unlearning.
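
The summary does not spell out how the proposed gating network combines constituent outputs, so the following is only an illustrative sketch, assuming the gate learns softmax weights over per-shard CNN logits from a cheaply pooled view of the input image; `GatedAggregator` and its layer sizes are assumptions, not the paper's architecture.

```python
# Illustrative gated aggregation over SISA constituent CNNs (assumed design).
# Each constituent maps an image batch to class logits; the gate produces a
# per-constituent weight so the ensemble can learn which shards to trust.
import torch
import torch.nn as nn

class GatedAggregator(nn.Module):
    def __init__(self, constituents, in_channels=3):
        super().__init__()
        self.constituents = nn.ModuleList(constituents)  # one CNN per shard
        self.pool = nn.AdaptiveAvgPool2d(8)              # cheap 8x8 image summary
        self.gate = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_channels * 8 * 8, 64),
            nn.ReLU(),
            nn.Linear(64, len(constituents)),
        )

    def forward(self, x):
        # Stack per-constituent logits: (batch, num_models, num_classes).
        logits = torch.stack([m(x) for m in self.constituents], dim=1)
        # Softmax gate weights per constituent: (batch, num_models).
        weights = torch.softmax(self.gate(self.pool(x)), dim=-1)
        # Weighted fusion of constituent logits: (batch, num_classes).
        return (weights.unsqueeze(-1) * logits).sum(dim=1)
```

A learned gate of this kind gives the ensemble a way to down-weight constituents whose shards were perturbed by unlearning, which is one plausible mechanism by which such a component could help preserve aggregate accuracy after class removal.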