BID-LoRA: A Parameter-Efficient Framework for Continual Learning and Unlearning

arXiv cs.LG · April 15, 2026


Key Points

  • The paper highlights a gap in unified systems that can both learn continuously (CL) and remove outdated or sensitive information (machine unlearning, MU) without harming previously acquired knowledge.
  • It shows that simply combining existing continual learning and unlearning methods can cause knowledge leakage and gradual degradation over repeated adaptation cycles.
  • The authors formalize “Continual Learning Unlearning (CLU)” with goals covering precise deletion, efficient knowledge integration, and minimized leakage across cycles.
  • They introduce BID-LoRA, which attaches three adapter pathways (retain, new, unlearn) to attention layers and adds an "escape unlearning" mechanism that moves forget-class embeddings far from retained knowledge; the whole framework updates only about 5% of parameters.
  • Experiments on CIFAR-100 and CASIA-Face100 indicate BID-LoRA outperforms CLU baselines across multiple cycles and is positioned for identity management workflows where users may need to be both enrolled and removed.
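The three-pathway adapter design described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's implementation: the class names (`LoRAPath`, `ThreePathAttentionAdapter`), the rank/alpha values, and the way pathways are activated are all assumptions; only the overall shape (frozen base weight plus retain/new/unlearn low-rank adapters, with a small trainable fraction) comes from the summary.

```python
import numpy as np

class LoRAPath:
    """One low-rank adapter pathway: delta(x) = (alpha/r) * B @ A @ x.
    Hypothetical sketch of a standard LoRA branch, not the paper's code."""
    def __init__(self, d_in, d_out, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.A = rng.normal(0.0, 0.02, (r, d_in))  # down-projection
        self.B = np.zeros((d_out, r))              # up-projection, zero-init
        self.scale = alpha / r

    def delta(self, x):
        # low-rank update applied to an input vector x of shape (d_in,)
        return self.scale * (self.B @ (self.A @ x))

class ThreePathAttentionAdapter:
    """Frozen base projection plus dedicated retain / new / unlearn pathways,
    mirroring the three-pathway design the summary describes."""
    def __init__(self, d, r=4):
        rng = np.random.default_rng(1)
        self.W_base = rng.normal(0.0, 0.02, (d, d))  # frozen pretrained weight
        self.paths = {name: LoRAPath(d, d, r=r, seed=i)
                      for i, name in enumerate(["retain", "new", "unlearn"])}

    def forward(self, x, active=("retain", "new")):
        # base output plus the deltas of whichever pathways are active
        y = self.W_base @ x
        for name in active:
            y = y + self.paths[name].delta(x)
        return y

    def trainable_fraction(self):
        # fraction of parameters that the adapters add on top of the base
        base = self.W_base.size
        adapters = sum(p.A.size + p.B.size for p in self.paths.values())
        return adapters / (base + adapters)

adapter = ThreePathAttentionAdapter(d=256, r=4)
x = np.ones(256)
y = adapter.forward(x)
print(f"adapter parameters: {adapter.trainable_fraction():.1%} of total")
```

With these toy dimensions the three adapters account for under 10% of parameters, which is the same order as the roughly 5% trainable fraction the paper reports; the exact fraction depends on the model width and adapter rank.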

Abstract

Recent advances in deep learning underscore the need for systems that can not only acquire new knowledge through Continual Learning (CL) but also remove outdated, sensitive, or private information through Machine Unlearning (MU). However, while CL methods are well-developed, MU techniques remain in early stages, creating a critical gap for unified frameworks that depend on both capabilities. We find that naively combining existing CL and MU approaches results in knowledge leakage: a gradual degradation of foundational knowledge across repeated adaptation cycles. To address this, we formalize Continual Learning Unlearning (CLU) as a unified paradigm with three key goals: (i) precise deletion of unwanted knowledge, (ii) efficient integration of new knowledge while preserving prior information, and (iii) minimizing knowledge leakage across cycles. We propose Bi-Directional Low-Rank Adaptation (BID-LoRA), a novel framework featuring three dedicated adapter pathways (retain, new, and unlearn) applied to attention layers, combined with escape unlearning, which pushes forget-class embeddings to positions maximally distant from retained knowledge, while updating only 5% of parameters. Experiments on CIFAR-100 show that BID-LoRA outperforms CLU baselines across multiple adaptation cycles. We further evaluate on CASIA-Face100, a curated face recognition subset, demonstrating practical applicability to real-world identity management systems where new users must be enrolled and withdrawn users removed.
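The "escape unlearning" idea, pushing forget-class embeddings away from retained knowledge, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function name `escape_unlearning_step`, the use of class prototypes, and the nearest-prototype repulsion rule are stand-ins; the paper's actual objective may differ substantially.

```python
import numpy as np

def escape_unlearning_step(emb, retained_protos, lr=0.1):
    """One illustrative step that pushes a forget-class embedding directly
    away from its nearest retained-class prototype (a sketch of the
    'maximally distant from retained knowledge' idea, not the paper's loss)."""
    dists = np.linalg.norm(retained_protos - emb, axis=1)
    nearest = retained_protos[np.argmin(dists)]
    direction = emb - nearest
    direction = direction / (np.linalg.norm(direction) + 1e-8)
    return emb + lr * direction

rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 8))             # retained-class prototypes (toy)
emb = protos[0] + 0.01 * rng.normal(size=8)  # forget embedding near class 0
start_min = np.linalg.norm(protos - emb, axis=1).min()

for _ in range(50):
    emb = escape_unlearning_step(emb, protos)

end_min = np.linalg.norm(protos - emb, axis=1).min()
print(f"min distance to retained prototypes: {start_min:.3f} -> {end_min:.3f}")
```

After repeated steps the embedding ends up farther from every retained prototype than where it started, which is the behavior the abstract attributes to escape unlearning: the forgotten class no longer sits close to any retained knowledge it could leak into.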