Pruned Adaptation Modules: A Simple yet Strong Baseline for Continual Foundation Models
arXiv cs.LG / 2026/3/24
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that foundation-model-based class-incremental learning (FM-based CIL) has advanced without clear comparisons against strong lightweight baselines, making it hard to judge whether reported improvements are genuine.
- It proposes Pruned Adaptation Modules (PAM), which freezes most of a pre-trained ResNet and adds sparse, task-specific adapter layers for continual adaptation.
- PAM reduces the number of trainable parameters by up to ~5× and the total parameter footprint by up to ~6×, lowering the compute and update cost for continual learning.
- Experiments across multiple benchmarks show that PAM mitigates catastrophic forgetting and matches or outperforms existing state-of-the-art FM-based CIL methods.
- The authors present PAM as a simple, transparent baseline to bridge the evaluation gap between traditional CIL and foundation-model-based approaches, with code released on GitHub.
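The core mechanism described above (a frozen pre-trained backbone plus sparse, task-specific adapter weights) can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the layer sizes, the 25% density, and the single-layer setup are all hypothetical, and a real PAM setup would operate on ResNet blocks with a learned pruning criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained layer (stands in for one ResNet block); sizes are illustrative.
d_in, d_out = 16, 8
W_frozen = rng.normal(size=(d_in, d_out))

# Task-specific adapter, pruned to ~25% density: only masked entries are trainable.
density = 0.25
mask = (rng.random((d_in, d_out)) < density).astype(float)
A = np.zeros((d_in, d_out))  # adapter starts at zero, so the model initially equals the frozen one

def forward(x):
    # Frozen path plus the sparse adapter delta.
    return x @ (W_frozen + mask * A)

# One SGD step on a toy regression target; only A is updated, W_frozen is never touched.
x = rng.normal(size=(4, d_in))
y_target = rng.normal(size=(4, d_out))
lr = 0.01
grad_A = mask * (x.T @ (2 * (forward(x) - y_target)) / len(x))  # gradient zeroed where pruned
A -= lr * grad_A

trainable = int(mask.sum())  # pruned adapter parameters for this task
total = W_frozen.size        # frozen backbone parameters in this layer
```

Because the mask zeroes both the adapter weights and their gradients, the trainable parameter count per task is only the surviving mask entries, which is the source of the parameter savings the paper reports.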

