Representation Finetuning for Continual Learning
arXiv cs.AI / 3/13/2026
Key Points
- CoRe (Continual Representation Learning) shifts fine-tuning from weight space to representation space to improve continual learning.
- It constrains updates to a low-rank subspace of hidden representations (see the sketch after this list), achieving parameter efficiency while preserving stability on past tasks and plasticity for future ones.
- Unlike many PEFT methods, CoRe trains its representation updates with explicit objectives, reducing sensitivity to domain shift and mitigating catastrophic forgetting.
- Across multiple continual-learning benchmarks, CoRe outperforms state-of-the-art methods, introducing representation finetuning as a new, interpretable paradigm.
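
For intuition, here is a minimal PyTorch sketch of the general idea of editing hidden representations within a low-rank subspace while the base model stays frozen. The class name `LowRankReprEdit`, the LoReFT-style intervention form `h' = h + Rᵀ(Wh + b − Rh)`, and all shapes below are illustrative assumptions for this summary, not CoRe's actual formulation or objectives.

```python
import torch
import torch.nn as nn

class LowRankReprEdit(nn.Module):
    """Hypothetical low-rank edit of a frozen model's hidden states.

    Illustrative intervention (LoReFT-style, not CoRe's definition):
        h' = h + R^T (W h + b - R h)
    R (rank x d) spans the trainable subspace; W, b give a learned
    target projection. The base model is frozen, so only about
    rank * (2d + 1) parameters train per task -- the source of the
    parameter-efficiency claim in the key points.
    """
    def __init__(self, d_model: int, rank: int):
        super().__init__()
        R = torch.empty(rank, d_model)
        nn.init.orthogonal_(R)                 # orthonormal subspace basis
        self.R = nn.Parameter(R)
        self.proj = nn.Linear(d_model, rank)   # computes W h + b

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Difference between the learned target and the current state,
        # expressed in subspace coordinates ...
        delta = self.proj(h) - h @ self.R.T    # (..., rank)
        # ... written back along the basis, so the edit never leaves
        # span(R) and the rest of the representation is untouched.
        return h + delta @ self.R


# Usage: attach to one layer's hidden states of a frozen backbone.
edit = LowRankReprEdit(d_model=768, rank=8)
h = torch.randn(4, 16, 768)                    # (batch, seq, d_model)
print(edit(h).shape)                           # torch.Size([4, 16, 768])
```

One plausible way such a design supports stability is to constrain each new task's subspace, e.g. keeping new rows of `R` orthogonal to those of earlier tasks; that particular objective is an assumption here, not a detail given in the summary.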