A Self-Supervised Learning Framework for Imbalanced Medical Imaging Datasets

arXiv cs.CV / 4/3/2026


Key Points

  • The paper addresses medical image classification under limited labeled data and long-tailed class imbalance by extending the authors' earlier self-supervised learning approach (MIMV) into AMIMV, which constructs asymmetric multi-image, multi-view pairs (see the sketch after this list).
  • It introduces an analysis that tests AMIMV's robustness across varying imbalance ratios, explicitly targeting a gap in prior work on how SSL performs on imbalanced medical datasets.
  • The authors benchmark eight representative self-supervised learning methods on 11 MedMNIST datasets under long-tailed distributions and limited supervision, comparing their behavior under realistic constraints.
  • Reported improvements include +4.25% on RetinaMNIST, +1.88% on TissueMNIST, and +3.1% on DermaMNIST, suggesting AMIMV better handles both data scarcity and rare-class underrepresentation.

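The paper's exact AMIMV augmentation is not spelled out in this summary; as a rough illustration only, the sketch below shows one plausible way to construct an asymmetric multi-image, multi-view positive pair, where one branch receives several strongly augmented views drawn from the anchor image plus randomly sampled support images, while the other branch receives a single weakly augmented view of the anchor. The function name, transform choices, and the form of the asymmetry are assumptions, not the authors' implementation.

```python
import random
import torch
import torchvision.transforms as T

# Hypothetical augmentation pipelines; the paper's actual transforms may differ.
weak_aug = T.Compose([
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
strong_aug = T.Compose([
    T.RandomResizedCrop(28, scale=(0.5, 1.0)),     # MedMNIST images are 28x28
    T.ColorJitter(brightness=0.4, contrast=0.4),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

def make_asymmetric_pair(anchor_img, support_imgs, n_views=2):
    """Build one asymmetric multi-image, multi-view pair (illustrative sketch).

    The "multi" branch stacks several strongly augmented views sampled from the
    anchor and from support images; the "single" branch is one weakly augmented
    view of the anchor. The mismatch between the two branches is what this
    sketch takes "asymmetric" to mean.
    """
    multi_views = [strong_aug(anchor_img)]
    for _ in range(n_views - 1):
        source = random.choice([anchor_img] + list(support_imgs))
        multi_views.append(strong_aug(source))
    multi_branch = torch.stack(multi_views)    # (n_views, C, 28, 28)
    single_branch = weak_aug(anchor_img)       # (C, H, W)
    return multi_branch, single_branch
```

In a typical SSL training loop, the two branches would be encoded separately and pulled together by whatever contrastive or distillation objective the chosen method uses.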
Abstract

Two problems often plague medical imaging analysis: 1) non-availability of large quantities of labeled training data, and 2) dealing with imbalanced data, i.e., abundant data are available for the frequent classes, whereas data are highly limited for the rare classes. Self-supervised learning (SSL) methods have been proposed to deal with the first problem to a certain extent, but the robustness of SSL to imbalanced data has rarely been investigated in the domain of medical image classification. In this work, we make the following contributions: 1) The MIMV method proposed in our earlier work is extended with a new augmentation strategy that constructs asymmetric multi-image, multi-view (AMIMV) pairs, addressing both data scarcity and dataset imbalance in medical image classification. 2) We carry out a data analysis to evaluate the robustness of AMIMV under varying degrees of class imbalance in medical imaging. 3) We evaluate eight representative SSL methods on 11 medical imaging datasets (MedMNIST) under long-tailed distributions and limited supervision. Our experimental results on the MedMNIST datasets show improvements of 4.25% on RetinaMNIST, 1.88% on TissueMNIST, and 3.1% on DermaMNIST.
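
The abstract does not specify how the varying imbalance ratios are generated; a common protocol for this kind of robustness analysis is exponential subsampling by class rank, and the sketch below follows that assumption. The function name and the definition of the ratio (head-class size divided by tail-class size) are illustrative choices, not necessarily the paper's protocol.

```python
import numpy as np

def long_tailed_indices(labels, imbalance_ratio, seed=0):
    """Subsample a labeled dataset into an exponential long-tailed split.

    imbalance_ratio = (samples kept for the head class) / (samples kept for
    the tail class). Class sizes decay exponentially with class rank, which is
    one standard way to create controlled long-tailed benchmarks.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).ravel()
    classes = np.unique(labels)
    # Cap the head-class size so every class can supply enough samples.
    head_size = min(int(np.sum(labels == c)) for c in classes)
    keep = []
    for rank, c in enumerate(classes):
        frac = imbalance_ratio ** (-rank / max(len(classes) - 1, 1))
        n_keep = max(int(head_size * frac), 1)
        idx = np.flatnonzero(labels == c)
        keep.extend(rng.choice(idx, size=n_keep, replace=False))
    return np.sort(np.array(keep))

# Example: build splits at several imbalance ratios from an array of labels
# (e.g. the training labels of one MedMNIST dataset) and evaluate at each.
# for ratio in (10, 50, 100):
#     subset = long_tailed_indices(train_labels, imbalance_ratio=ratio)
```

Evaluating each SSL method on such splits at several ratios, with only a small labeled fraction used for the downstream classifier, matches the "long-tailed distributions and limited supervision" setting the abstract describes.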