Designing to Forget: Deep Semi-parametric Models for Unlearning

arXiv cs.CV / 3/25/2026


Key Points

  • The paper argues that machine unlearning difficulty varies by model architecture and proposes deep semi-parametric models (SPMs) tailored for efficient unlearning.
  • SPMs incorporate a fusion module that aggregates per-training-sample information, enabling explicit test-time deletion of selected samples without changing the model’s learned parameters.
  • Experiments on image classification and generation show SPMs maintain competitive task performance versus traditional fully parametric models.
  • On ImageNet classification, SPMs narrow the prediction gap versus an oracle retraining baseline by 11% and deliver more than 10× faster unlearning than prior approaches for parametric models.
  • The authors provide an open-source implementation at the linked GitHub repository.

Abstract

Recent advances in machine unlearning have focused on developing algorithms to remove specific training samples from a trained model. In contrast, we observe that not all models are equally easy to unlearn. Hence, we introduce a family of deep semi-parametric models (SPMs) that exhibit non-parametric behavior during unlearning. SPMs use a fusion module that aggregates information from each training sample, enabling explicit test-time deletion of selected samples without altering model parameters. Empirically, we demonstrate that SPMs achieve task performance competitive with parametric models in image classification and generation, while being significantly more efficient for unlearning. Notably, on ImageNet classification, SPMs reduce the prediction gap relative to a retrained (oracle) baseline by 11% and achieve over 10× faster unlearning compared to existing approaches on parametric models. The code is available at https://github.com/amberyzheng/spm_unlearning.
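To make the core idea concrete, here is a minimal sketch of a semi-parametric predictor in the spirit described above. This is an illustrative toy, not the paper's actual architecture: it assumes a memory of per-training-sample embeddings, a softmax-similarity fusion over that memory, and unlearning implemented as deleting a sample's memory row at test time, with no parameter updates. All class and method names are invented for this example.

```python
import numpy as np

class SemiParametricMemory:
    """Toy semi-parametric predictor: a non-parametric memory of
    per-sample embeddings fused by softmax similarity. Unlearning a
    sample is simply deleting its row from the memory (hypothetical
    illustration of the SPM idea, not the paper's implementation)."""

    def __init__(self, dim, num_classes, temperature=1.0):
        self.keys = np.empty((0, dim))            # per-sample embeddings
        self.values = np.empty((0, num_classes))  # one-hot labels
        self.ids = []                             # sample identifiers
        self.temperature = temperature

    def add(self, sample_id, embedding, label_onehot):
        # Store one training sample's contribution in the memory.
        self.keys = np.vstack([self.keys, embedding[None, :]])
        self.values = np.vstack([self.values, label_onehot[None, :]])
        self.ids.append(sample_id)

    def predict(self, query):
        # Fusion step: softmax-weighted aggregation over all stored samples.
        logits = self.keys @ query / self.temperature
        w = np.exp(logits - logits.max())
        w /= w.sum()
        return w @ self.values

    def unlearn(self, sample_id):
        # Explicit test-time deletion: remove the sample's contribution
        # outright, leaving every other stored value untouched.
        i = self.ids.index(sample_id)
        self.keys = np.delete(self.keys, i, axis=0)
        self.values = np.delete(self.values, i, axis=0)
        self.ids.pop(i)
```

Because each training sample's influence lives in an explicit memory row rather than being folded into shared weights, deletion is exact and takes O(1) updates per removed sample, which is the intuition behind the reported speedup over retraining-based unlearning of fully parametric models.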