Towards Realistic Class-Incremental Learning with Free-Flow Increments

arXiv cs.LG / 4/6/2026


Key Points

  • Class-incremental learning (CIL) is often tested with fixed, equal-sized task schedules, but the paper argues this misses more realistic scenarios where a variable number of new classes arrive at each step.
  • The authors introduce Free-Flow Class-Incremental Learning (FFCIL), a formal setting where unseen classes stream in with highly variable counts, and show that many existing CIL methods become brittle and degrade in performance.
  • They propose a model-agnostic, robustness-focused framework including a class-wise mean (CWM) objective that stabilizes learning by using uniformly aggregated class-conditional supervision rather than sample-frequency weighting.
  • Additional method-wise improvements include constraining distillation to replayed data, normalizing the scale of contrastive and knowledge transfer losses, and adding Dynamic Intervention Weight Alignment (DIWA) to avoid over-adjustment from unstable statistics.
  • Experiments reportedly confirm consistent gains from the proposed strategies across multiple CIL baselines under the new free-flow arrival setting.
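The class-wise mean idea in the third point can be sketched concretely. The paper's exact formulation is not given here, so the following is an illustrative NumPy sketch under the stated interpretation: average the per-sample loss within each class first, then average uniformly over the classes present, instead of averaging over all samples (which implicitly weights classes by their sample frequency).

```python
import numpy as np

def softmax_xent(logits, targets):
    """Per-sample softmax cross-entropy (no reduction)."""
    z = logits - logits.max(axis=1, keepdims=True)          # stabilize exp
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True)) # log-softmax
    return -logp[np.arange(len(targets)), targets]

def class_wise_mean_loss(logits, targets):
    """Illustrative class-wise mean (CWM) objective: mean loss per class,
    then a uniform mean over classes, removing sample-frequency weighting."""
    per_sample = softmax_xent(logits, targets)
    classes = np.unique(targets)
    return np.mean([per_sample[targets == c].mean() for c in classes])
```

Under this reading, a class that contributes few samples in a free-flow increment still receives the same weight in the loss as a frequent one, which is the claimed stabilization effect; on a perfectly balanced batch the CWM loss coincides with the ordinary sample mean.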

Abstract

Class-incremental learning (CIL) is typically evaluated under predefined schedules with equal-sized tasks, leaving more realistic and complex cases unexplored. However, a practical CIL system should learn immediately when any number of new classes arrives, without forcing fixed-size tasks. We formalize this setting as Free-Flow Class-Incremental Learning (FFCIL), where data arrives as a more realistic stream with a highly variable number of unseen classes at each step. This setting makes many existing CIL methods brittle and leads to clear performance degradation. We propose a model-agnostic framework for robust CIL under free-flow arrivals. It comprises a class-wise mean (CWM) objective that replaces the sample-frequency-weighted loss with uniformly aggregated class-conditional supervision, thereby stabilizing the learning signal across free-flow class increments, as well as method-wise adjustments that improve robustness for representative CIL paradigms. Specifically, we constrain distillation to replayed data, normalize the scale of contrastive and knowledge transfer losses, and introduce Dynamic Intervention Weight Alignment (DIWA) to prevent over-adjustment caused by unstable statistics from small class increments. Experiments confirm a clear performance degradation across various CIL baselines under FFCIL, while our strategies yield consistent gains.
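To make the FFCIL setting tangible, a free-flow arrival schedule can be simulated by partitioning class IDs into increments of highly variable size. The sampling scheme below (uniform step sizes between 1 and `max_step`) is an assumption for illustration, not the paper's protocol:

```python
import random

def free_flow_schedule(num_classes, seed=0, max_step=5):
    """Partition class IDs into a stream of increments whose sizes vary
    step to step, in contrast to the fixed equal-sized tasks of standard
    CIL benchmarks. The size distribution here is an illustrative choice."""
    rng = random.Random(seed)
    classes = list(range(num_classes))
    rng.shuffle(classes)
    steps, i = [], 0
    while i < num_classes:
        k = rng.randint(1, max_step)       # variable number of new classes
        steps.append(classes[i:i + k])
        i += k
    return steps
```

A method evaluated under such a schedule must cope with steps that introduce a single class (where batch statistics are unstable, the failure mode DIWA targets) as well as steps that introduce many at once.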