Is Prompt Selection Necessary for Task-Free Online Continual Learning?

arXiv cs.LG / 4/7/2026


Key Points

  • The paper investigates task-free online continual learning, where data streams are non-stationary and task boundaries are absent, and argues that prompt selection strategies frequently choose poor prompts and underperform.
  • It proposes a simpler alternative called SinglePrompt that removes the need for prompt selection by injecting one fixed prompt into each self-attention block and optimizing the classifier more directly.
  • The method uses a cosine-similarity-based logit formulation to reduce classifier forgetting and applies masking to logits for classes not present in the current minibatch.
  • The authors report state-of-the-art performance across multiple online continual learning benchmarks and provide source code via their GitHub repository.
  • Overall, the work suggests that, in task-free continual learning settings, carefully designed single-prompt conditioning and classifier optimization can outperform adaptive prompt-selection approaches.
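The cosine-similarity logit design mentioned above can be illustrated with a short sketch. This is not the authors' implementation; the class name, feature dimension, and `scale` temperature are hypothetical. The idea is that L2-normalizing both the features and the classifier weights keeps logit magnitudes comparable across classes, which limits the drift toward recently seen classes that drives classifier forgetting.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    """Illustrative cosine-similarity classifier head (hypothetical names).

    Both the input features and the per-class weight vectors are
    L2-normalized, so each logit is a cosine similarity in [-1, 1]
    scaled by a fixed temperature, rather than an unbounded dot product.
    """

    def __init__(self, dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, dim))
        self.scale = scale  # hypothetical temperature; the paper's value may differ

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=-1)        # unit-norm features
        w = F.normalize(self.weight, dim=-1)      # unit-norm class weights
        return self.scale * feats @ w.t()         # bounded logits
```

Because every logit is bounded by `scale`, no class can dominate purely through weight-norm growth during the online stream.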

Abstract

Task-free online continual learning has recently emerged as a realistic paradigm for continual learning in dynamic, real-world environments, where data arrive in a non-stationary stream without clear task boundaries and can only be observed once. To address such challenging scenarios, many recent approaches have employed prompt selection, an adaptive strategy that selects prompts from a pool based on input signals. However, we observe that such selection strategies often fail to select appropriate prompts, yielding suboptimal results despite the additional training of key parameters. Motivated by this observation, we propose SinglePrompt, a simple yet effective method that eliminates the need for prompt selection and focuses on classifier optimization. Specifically, we (i) inject a single prompt into each self-attention block, (ii) employ a cosine-similarity-based logit design to alleviate the forgetting inherent in the classifier weights, and (iii) mask the logits of classes not present in the current minibatch. With this simple task-free design, our framework achieves state-of-the-art performance across various online continual learning benchmarks. Source code is available at https://github.com/efficient-learning-lab/SinglePrompt.
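Step (iii) of the abstract, masking logits of classes absent from the current minibatch, can be sketched as follows. This is a hedged illustration, not the paper's code: the function name is hypothetical, and it simply sets absent-class logits to negative infinity before cross-entropy, so the softmax assigns them zero probability and their (effectively frozen) classifier weights receive no negative gradient from the current batch.

```python
import torch
import torch.nn.functional as F

def masked_ce_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy over only the classes seen in this minibatch (illustrative).

    Logits of classes absent from `labels` are masked to -inf, so their
    softmax probability is exactly zero and no gradient flows into the
    corresponding classifier weights from this batch.
    """
    present = torch.unique(labels)                       # classes in the batch
    mask = torch.full_like(logits, float("-inf"))        # block everything...
    mask[:, present] = 0.0                               # ...except present classes
    return F.cross_entropy(logits + mask, labels)
```

In an online stream this keeps old-class weights untouched while the model fits the classes actually exposed in the current minibatch.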