AI Navigate

Residual SODAP: Residual Self-Organizing Domain-Adaptive Prompting with Structural Knowledge Preservation for Continual Learning

arXiv cs.AI / 3/16/2026

📰 News · Models & Research

Key Points

  • It targets catastrophic forgetting in domain-incremental continual learning and introduces Residual SODAP to jointly adapt representations via prompts while preserving classifier-level knowledge.
  • It combines alpha-entmax sparse prompt selection, residual aggregation, data-free distillation with pseudo-feature replay, prompt-usage-based drift detection, and uncertainty-aware multi-loss balancing to improve robustness under domain shifts.
  • On three domain-incremental benchmarks without task IDs or extra data storage, it achieves state-of-the-art AvgACC/AvgF scores: 0.850/0.047 (DR), 0.760/0.031 (Skin Cancer), and 0.995/0.003 (CORe50).
  • The method is designed for practical deployment: by eliminating stored data and explicit task IDs, it enables continual learning in data-constrained settings.
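The sparse prompt-selection step above can be sketched concretely. The snippet below implements the exact 1.5-entmax mapping (Peters & Martins, 2019), a sparse alternative to softmax that can assign exactly zero weight to irrelevant prompts, and uses it to aggregate a prompt pool and add the result to frozen backbone features as a residual. The function names, the query/key matching scheme, and the residual-addition design are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def entmax15(scores):
    """Exact 1.5-entmax (Peters & Martins, 2019).

    Maps real-valued scores to a probability vector that can contain
    exact zeros, unlike softmax, which always gives nonzero mass.
    """
    z = np.asarray(scores, dtype=np.float64) / 2.0
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    mean = np.cumsum(z_sorted) / k           # running mean of top-k scores
    mean_sq = np.cumsum(z_sorted ** 2) / k   # running mean of squares
    ss = k * (mean_sq - mean ** 2)           # scaled variance of top-k
    delta = np.maximum((1.0 - ss) / k, 0.0)
    tau = mean - np.sqrt(delta)              # candidate thresholds
    support = int((tau <= z_sorted).sum())   # size of the nonzero support
    tau_star = tau[support - 1]
    return np.maximum(z - tau_star, 0.0) ** 2

def select_prompts_residual(query, prompt_keys, prompt_values, features):
    """Hypothetical prompt selection: entmax weights over query-key
    similarities, then the weighted prompt combination is added to the
    frozen features as a residual (an assumption about the design)."""
    sims = prompt_keys @ query              # (num_prompts,) similarities
    weights = entmax15(sims)                # sparse simplex weights
    aggregated = weights @ prompt_values    # (feat_dim,) prompt mixture
    return features + aggregated, weights
```

For well-separated scores such as `[3.0, 0.0, -3.0]`, `entmax15` concentrates all mass on the top prompt, illustrating how sparse selection can sidestep the suboptimal soft mixing the paper attributes to prompt-only methods.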

Abstract

Continual learning (CL) suffers from catastrophic forgetting, which is exacerbated in domain-incremental learning (DIL) where task identifiers are unavailable and storing past data is infeasible. While prompt-based CL (PCL) adapts representations with a frozen backbone, we observe that prompt-only improvements are often insufficient due to suboptimal prompt selection and classifier-level instability under domain shifts. We propose Residual SODAP, which jointly performs prompt-based representation adaptation and classifier-level knowledge preservation. Our framework combines α-entmax sparse prompt selection with residual aggregation, data-free distillation with pseudo-feature replay, prompt-usage-based drift detection, and uncertainty-aware multi-loss balancing. Across three DIL benchmarks without task IDs or extra data storage, Residual SODAP achieves state-of-the-art AvgACC/AvgF of 0.850/0.047 (DR), 0.760/0.031 (Skin Cancer), and 0.995/0.003 (CORe50).
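The abstract does not spell out the uncertainty-aware multi-loss balancing, but a common formulation is homoscedastic-uncertainty weighting (Kendall et al., 2018), sketched below under the assumption that each loss term gets a learned log-variance parameter; the paper's exact scheme may differ.

```python
import numpy as np

def uncertainty_weighted_total(losses, log_vars):
    """Homoscedastic-uncertainty loss balancing (Kendall et al., 2018).

    Each loss L_i is scaled by exp(-s_i), where s_i = log(sigma_i^2) is
    learned jointly with the model, plus a +s_i regularizer so the model
    cannot drive all weights to zero. Losses the model is uncertain
    about (large s_i) are automatically down-weighted.
    """
    losses = np.asarray(losses, dtype=np.float64)
    log_vars = np.asarray(log_vars, dtype=np.float64)
    return float(np.sum(np.exp(-log_vars) * losses + log_vars))
```

With all log-variances at zero this reduces to a plain sum of the losses, so the scheme only departs from uniform weighting once the learned uncertainties move away from their initialization.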