AI Navigate

Enhancing Pretrained Model-based Continual Representation Learning via Guided Random Projection

arXiv cs.LG / 3/20/2026


Key Points

  • The paper proposes SCL-MGSM, a continual learner whose MemoryGuard Supervisory Mechanism (MGSM) replaces the random initialization of the Random Projection Layer (RPL) with memory-guided, data-driven basis selection, adapting pre-trained model representations to downstream tasks.
  • It identifies two limitations of RPL-based continual learning under large domain gaps: a randomly initialized RPL lacks expressivity, and enlarging its dimension to compensate destabilizes the analytic updates.
  • The mechanism constructs a compact but expressive RPL by progressively selecting target-aligned random bases, improving numerical stability of the linear head's updates.
  • Empirical results across exemplar-free Class Incremental Learning benchmarks show SCL-MGSM outperforming state-of-the-art methods.
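The instability that the second key point describes can be seen directly in the matrix that an analytic (closed-form ridge) head must invert. The sketch below is not the paper's implementation; it is a minimal numpy illustration, with made-up toy dimensions, of why widening a randomly initialized RPL worsens the conditioning of that matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for frozen PTM features: n samples, d-dim embeddings.
n, d, n_classes = 200, 64, 10
feats = rng.standard_normal((n, d))
labels = np.eye(n_classes)[rng.integers(0, n_classes, size=n)]  # one-hot

def analytic_head(feats, labels, rpl_dim, lam=1e-3):
    """Random Projection Layer + analytic (ridge) linear head.

    Returns the head weights and the condition number of the
    regularized Gram matrix that the closed-form update must invert.
    """
    W_rpl = rng.standard_normal((feats.shape[1], rpl_dim))
    H = np.maximum(feats @ W_rpl, 0.0)            # projected features (ReLU)
    gram = H.T @ H + lam * np.eye(rpl_dim)        # matrix inverted by the update
    W_head = np.linalg.solve(gram, H.T @ labels)  # closed-form ridge solution
    return W_head, np.linalg.cond(gram)

# A wider RPL is more expressive, but once its dimension exceeds the
# sample count the Gram matrix becomes rank-deficient up to the ridge
# term, so its condition number explodes.
for dim in (128, 2048):
    _, cond = analytic_head(feats, labels, dim)
    print(f"RPL dim {dim:>4}: condition number {cond:.2e}")
```

The contrast between the two printed condition numbers mirrors the trade-off the paper targets: expressivity from width versus numerical stability of the recursive analytic updates.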

Abstract

Recent paradigms in Random Projection Layer (RPL)-based continual representation learning have demonstrated superior performance when building upon a pre-trained model (PTM). These methods insert a randomly initialized RPL after a PTM to enhance feature representation in the initial stage. Subsequently, a linear classification head is used for analytic updates in the continual learning stage. However, under severe domain gaps between pre-trained representations and target domains, a randomly initialized RPL exhibits limited expressivity. While aggressively scaling up the RPL dimension can improve expressivity, it also induces an ill-conditioned feature matrix, thereby destabilizing the recursive analytic updates of the linear head. To this end, we propose the Stochastic Continual Learner with MemoryGuard Supervisory Mechanism (SCL-MGSM). Unlike random initialization, MGSM constructs the projection layer via a principled, data-guided mechanism that progressively selects target-aligned random bases to adapt the PTM representation to downstream tasks. This facilitates the construction of a compact yet expressive RPL while improving the numerical stability of analytic updates. Extensive experiments on multiple exemplar-free Class Incremental Learning (CIL) benchmarks demonstrate that SCL-MGSM achieves superior performance compared to state-of-the-art methods.
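The abstract does not spell out how MGSM selects its bases, but "progressively selects target-aligned random bases" suggests a greedy, matching-pursuit-style procedure. The sketch below is one plausible reading, not the paper's algorithm: draw a large pool of random candidate bases, then keep only the few whose activations best explain the remaining target residual. All names (`guided_rpl`, `pool_size`, `k`) and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen-PTM features and one-hot targets for a downstream task.
n, d, n_classes = 200, 64, 10
feats = rng.standard_normal((n, d))
targets = np.eye(n_classes)[rng.integers(0, n_classes, size=n)]

def guided_rpl(feats, targets, pool_size=512, k=64):
    """Progressively select target-aligned random bases (OMP-style sketch).

    Instead of keeping an arbitrary random projection, draw a large pool
    of candidate bases and greedily retain the k whose activations best
    align with the residual of the targets.
    """
    pool = rng.standard_normal((feats.shape[1], pool_size))
    acts = np.maximum(feats @ pool, 0.0)  # candidate activations (ReLU)
    acts /= np.linalg.norm(acts, axis=0, keepdims=True) + 1e-12
    residual = targets.copy()
    chosen = []
    for _ in range(k):
        # Score each unused candidate by alignment with the residual.
        scores = np.linalg.norm(acts.T @ residual, axis=1)
        if chosen:
            scores[chosen] = -np.inf
        j = int(np.argmax(scores))
        chosen.append(j)
        # Approximate deflation: remove the selected direction's
        # contribution from the residual (no full re-orthogonalization).
        h = acts[:, [j]]
        residual -= h @ (h.T @ residual)
    return pool[:, chosen]  # compact, target-aligned RPL weights

W_rpl = guided_rpl(feats, targets)
print("selected RPL shape:", W_rpl.shape)  # (64, 64)
```

Under this reading, the resulting projection stays compact (k bases instead of a very wide random layer), which is exactly what keeps the downstream Gram matrix well-conditioned for the analytic head.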