Orthogonal Subspace Projection for Continual Machine Unlearning via SVD-Based LoRA

arXiv cs.LG / 4/15/2026


Key Points

  • The paper addresses continual machine unlearning in which multiple sequential deletion requests must be handled without erasing knowledge that should remain.
  • It argues that naïvely combining many sequential LoRA modules causes parameter collisions and strong interference between tasks.
  • The proposed method uses SVD-guided orthogonal subspace projection to constrain each new LoRA update to lie in the orthogonal complement of subspaces used by earlier unlearning tasks.
  • Experiments on CIFAR-100 (ResNet-20) and MNIST show stable performance over long unlearning sequences, avoiding the interference seen in static fusion.
  • After thirty sequential unlearning tasks, the method maintains baseline retained accuracy (~58.1%) while achieving strong unlearning efficacy, whereas state-of-the-art static fusion degrades retained accuracy from 60.39% to 12.70%.

Abstract

Continual machine unlearning aims to remove the influence of data that should no longer be retained, while preserving the usefulness of the model on everything else. This setting becomes especially difficult when deletion requests arrive sequentially, because the model must repeatedly adapt without erasing previously retained knowledge. Low-Rank Adaptation (LoRA) offers an efficient way to implement such updates, but naïvely combining many sequential LoRA modules leads to parameter collision, causing *strong interference* between tasks. We propose a static alternative based on Singular Value Decomposition (SVD)-guided orthogonal subspace projection. Our method constrains each new LoRA update during training so that it lies in the orthogonal complement of the subspaces used by earlier unlearning tasks. This preserves task isolation without requiring dynamic routing at deployment. Experiments on CIFAR-100 with ResNet-20 and on MNIST show stable behavior across long sequences of unlearning tasks. After thirty sequential unlearning tasks, state-of-the-art static fusion reduces retained accuracy from 60.39% to 12.70%, whereas the proposed in-training constrained optimization maintains baseline performance (~58.1%) while preserving strong unlearning efficacy.
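The core projection idea described above can be sketched in a few lines of NumPy. This is a minimal illustration of the general technique (building the orthogonal complement of earlier tasks' update subspaces via SVD and a projector), not the paper's actual implementation; the dimensions, ranks, and variable names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4  # hypothetical layer width and LoRA rank

# Hypothetical low-rank LoRA updates from three earlier unlearning tasks,
# each of the form B @ A with rank r.
past_updates = [
    rng.standard_normal((d, r)) @ rng.standard_normal((r, d)) * 0.01
    for _ in range(3)
]

# Collect the column spaces of the earlier updates via SVD, keeping only
# directions with non-negligible singular values (the "used" subspaces).
basis_cols = []
for dW in past_updates:
    U, S, Vt = np.linalg.svd(dW, full_matrices=False)
    k = int((S > 1e-8 * S[0]).sum())  # effective rank of this update
    basis_cols.append(U[:, :k])
B = np.concatenate(basis_cols, axis=1)

# Orthonormalize the combined basis and build the projector onto its
# orthogonal complement: P_perp = I - Q Q^T.
Q, _ = np.linalg.qr(B)
P_perp = np.eye(d) - Q @ Q.T

# A candidate update for the new unlearning task is projected so it lies
# entirely outside the subspaces used by earlier tasks.
new_dW = rng.standard_normal((d, r)) @ rng.standard_normal((r, d)) * 0.01
constrained_dW = P_perp @ new_dW

# The constrained update has (near-)zero overlap with every earlier
# update's column space, so it cannot interfere with those tasks.
overlap = np.linalg.norm(Q.T @ constrained_dW)
print(f"overlap with earlier subspaces: {overlap:.2e}")
```

In the paper's setting this constraint is enforced during training rather than as a one-off post hoc projection, but the geometry is the same: each new update is restricted to the orthogonal complement of the span of prior updates, which is what prevents the parameter collisions seen in naïve static fusion.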