
Trust the Unreliability: Inward Backward Dynamic Unreliability Driven Coreset Selection for Medical Image Classification

arXiv cs.CV / 3/19/2026


Key Points

  • The paper proposes Dynamic Unreliability-Driven Coreset Selection (DUCS) for medical image classification to address limitations of traditional coreset selection caused by intra-class variation and inter-class similarity.
  • It introduces inward self-awareness (analyzing the evolution of confidence to quantify per-sample uncertainty) and backward memory tracking (monitoring how often samples are forgotten during training) to evaluate sample reliability.
  • The method selects unreliable samples that exhibit confidence fluctuations and are repeatedly forgotten, targeting samples near the decision boundary to help refine the boundary.
  • Empirical results on public medical datasets show DUCS outperforms state-of-the-art methods, particularly at high compression rates, indicating improved data efficiency for medical image classification.

Abstract

Efficiently managing and utilizing large-scale medical imaging datasets with limited resources presents significant challenges. While coreset selection helps reduce computational costs, its effectiveness on medical data remains limited due to inherent complexity, such as large intra-class variation and high inter-class similarity. To address this, we revisit the training process and observe that neural networks consistently produce stable confidence predictions for, and better remember, samples near class centers during training. However, concentrating on these samples may complicate the modeling of decision boundaries. Hence, we argue that more unreliable samples are, in fact, more informative for building the decision boundary. Based on this, we propose the Dynamic Unreliability-Driven Coreset Selection (DUCS) strategy. Specifically, we introduce an inward-backward unreliability assessment perspective: 1) Inward Self-Awareness: The model introspects its behavior by analyzing the evolution of confidence during training, thereby quantifying the uncertainty of each sample. 2) Backward Memory Tracking: The model reflects on its training trajectory by tracking how frequently each sample is forgotten, thus evaluating its retention ability for each sample. Next, we select unreliable samples that exhibit substantial confidence fluctuations and are repeatedly forgotten during training. This selection process ensures that the chosen samples lie near the decision boundary, thereby aiding the model in refining it. Extensive experiments on public medical datasets demonstrate our superior performance compared to state-of-the-art (SOTA) methods, particularly at high compression rates.
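The two unreliability signals described above, confidence fluctuation across epochs and repeated forgetting events, can be sketched as a scoring function. This is a minimal illustrative sketch, not the paper's implementation: the function names, the use of standard deviation as the fluctuation measure, the min-max normalization, and the equal-weight combination are all assumptions made for clarity.

```python
import numpy as np

def unreliability_scores(confidences, correct):
    """Score each sample's unreliability from its training history.

    confidences: (n_samples, n_epochs) array of true-class probabilities
                 logged at each epoch (hypothetical logging setup).
    correct:     (n_samples, n_epochs) boolean array of prediction correctness.

    Combines two signals:
      1) inward self-awareness: fluctuation (std) of confidence over epochs;
      2) backward memory tracking: count of forgetting events, i.e.
         transitions from correct at epoch t to incorrect at epoch t+1.
    """
    fluctuation = confidences.std(axis=1)
    forgetting = np.maximum(
        correct[:, :-1].astype(int) - correct[:, 1:].astype(int), 0
    ).sum(axis=1)

    # Min-max normalize each signal to [0, 1] before combining (assumed
    # equal weighting; the actual combination rule may differ).
    def norm(x):
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros(len(x))

    return norm(fluctuation) + norm(forgetting)

def select_coreset(confidences, correct, budget):
    """Keep the `budget` most unreliable samples (indices, highest score first)."""
    scores = unreliability_scores(confidences, correct)
    return np.argsort(-scores)[:budget]
```

Under this sketch, a sample whose confidence oscillates and that is repeatedly forgotten scores highest, matching the intuition that such samples sit near the decision boundary.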