Bi-CRCL: Bidirectional Conservative-Radical Complementary Learning with Pre-trained Foundation Models for Class-incremental Medical Image Analysis

arXiv cs.CV / 3/26/2026


Key Points

  • The paper addresses class-incremental learning for medical image analysis, focusing on retaining prior knowledge while adapting to newly emerging disease categories under privacy constraints that limit memory replay.
  • It introduces Bi-CRCL, a dual-learner framework combining a conservative learner (stability-oriented) and a radical learner (plasticity-oriented) to reduce catastrophic forgetting while enabling continual learning.
  • A bidirectional interaction mechanism supports both forward transfer and backward consolidation, and the system adaptively fuses both learners’ outputs at inference for more robust predictions.
  • The authors report that experiments across five medical imaging datasets show consistent improvements over state-of-the-art PFM-based CIL methods, including scenarios with cross-dataset shifts and different task configurations.

Abstract

Class-incremental learning (CIL) in medical image-guided diagnosis requires retaining prior diagnostic knowledge while adapting to newly emerging disease categories, which is critical for scalable clinical deployment. This problem is particularly challenging due to heterogeneous data and privacy constraints that prevent memory replay. Although pretrained foundation models (PFMs) have advanced general-domain CIL, their potential in medical imaging remains underexplored, where domain-specific adaptation is essential yet difficult due to anatomical complexity and inter-institutional heterogeneity. To address this gap, we conduct a systematic benchmark of recent PFM-based CIL methods and propose Bidirectional Conservative-Radical Complementary Learning (Bi-CRCL), a dual-learner framework inspired by complementary learning systems. Bi-CRCL integrates a conservative learner that preserves prior knowledge through stability-oriented updates and a radical learner that rapidly adapts to new categories via plasticity-oriented learning. A bidirectional interaction mechanism enables forward transfer and backward consolidation, allowing continual integration of new knowledge while mitigating catastrophic forgetting. During inference, outputs from both learners are adaptively fused for robust predictions. Experiments on five medical imaging datasets demonstrate consistent improvements over state-of-the-art methods under diverse settings, including cross-dataset shifts and varying task configurations.
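The paper does not give implementation details for the adaptive fusion step, but the idea of combining a stability-oriented and a plasticity-oriented learner at inference can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not the authors' method: it weights each learner's softmax output by a per-sample confidence score derived from prediction entropy (the entropy heuristic and the function names are assumptions for illustration).

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def confidence_weight(probs):
    # Heuristic confidence: lower entropy -> more peaked
    # distribution -> higher weight (an assumption, not from the paper).
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    return np.exp(-entropy)

def fuse_predictions(logits_conservative, logits_radical):
    """Fuse two learners' outputs with per-sample adaptive weights."""
    p_c = softmax(logits_conservative)  # stability-oriented learner
    p_r = softmax(logits_radical)       # plasticity-oriented learner
    w_c = confidence_weight(p_c)
    w_r = confidence_weight(p_r)
    # Normalize so each sample's weights sum to 1, then mix.
    alpha = (w_c / (w_c + w_r))[..., None]
    return alpha * p_c + (1 - alpha) * p_r
```

Because the fused output is a convex combination of two probability distributions, it remains a valid distribution, and the more confident learner dominates on each sample; any real implementation would learn or calibrate the fusion weights rather than rely on a fixed entropy heuristic.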