Orientation-Aware Unsupervised Domain Adaptation for Brain Tumor Classification Across Multi-Modal MRI

arXiv cs.CV / 5/6/2026

Key Points

  • The paper addresses poor real-world generalization of deep learning-based brain tumor classifiers caused by limited labeled MRI data and domain shifts across institutions (different scanners, protocols, and contrast settings).
  • It proposes an orientation-aware unsupervised domain adaptation framework that first classifies mixed 2D MRI slices into axial, sagittal, or coronal views and then uses an orientation-specific ResNet50-based feature extractor for tumor classification.
  • To adapt from multi-modal labeled sources (e.g., T1, T2, FLAIR) to an unlabeled post-contrast T1 target domain, it applies slice-wise unsupervised adaptation with feature-level alignment via a maximum mean discrepancy (MMD) loss.
  • Pseudo-label-guided adaptation is used alongside MMD to maintain class-discriminative information while learning domain-invariant representations.
  • Reported experiments show improved target-domain performance over prior methods, indicating that combining orientation-specific learning, multi-modal transfer, and unsupervised alignment improves robustness for clinical deployment.
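
The orientation-specific feature extractor maps naturally to one model per view. Below is a minimal PyTorch sketch of such a classifier: a ResNet50 backbone followed by four fully connected layers, instantiated once per orientation. The hidden widths (1024/512/256), the ImageNet initialization, and the three-class head are illustrative assumptions; the paper specifies only the backbone and the four FC layers.

```python
# Minimal sketch of a per-orientation classifier: ResNet50 backbone plus
# four fully connected layers. Hidden widths and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class OrientationSpecificClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V2")
        # Drop ResNet50's own FC head; keep the 2048-d pooled features.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2048, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One model per view, selected by the upstream orientation classifier.
models_by_view = {v: OrientationSpecificClassifier()
                  for v in ("axial", "sagittal", "coronal")}
```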

Abstract

The clinical integration of deep learning models for brain tumor diagnosis in neuro-oncology is severely constrained by limited expert-annotated MRI data and substantial inter-institutional domain shift arising from variations in scanners, imaging protocols, and contrast settings. These challenges significantly impair model generalization in real-world settings. To address this, we propose a novel orientation-aware unsupervised domain-adaptive framework for automated brain tumor classification using mixed 2D MRI slices. A CNN with a large receptive field first categorizes input slices into axial, sagittal, and coronal views. For each orientation, a CNN with a ResNet50 backbone augmented with four fully connected layers is trained to extract discriminative features for tumor classification. To mitigate annotation scarcity and domain discrepancies, we introduce a slice-wise unsupervised domain adaptation strategy that transfers knowledge from a multi-modal source domain (T1, T2, and FLAIR) to the post-contrast T1 target domain. Feature-level alignment is enforced using a maximum mean discrepancy (MMD) loss, complemented by pseudo-label-guided adaptation to preserve class discriminability. Extensive experiments demonstrate improved target-domain performance over prior approaches, highlighting the benefits of orientation-specific learning, multi-modal knowledge transfer, pseudo-label-guided adaptation, and unsupervised domain adaptation.
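
Feature-level alignment with MMD compares source and target feature batches under a kernel. Below is a minimal sketch of a multi-bandwidth Gaussian-kernel MMD loss; the bandwidth set and the biased (V-statistic) estimator are common choices assumed here, since the abstract does not state the paper's exact kernel configuration.

```python
# Minimal sketch of a Gaussian-kernel MMD loss for aligning source and
# target feature distributions. Bandwidths below are assumptions.
import torch

def gaussian_mmd(source: torch.Tensor, target: torch.Tensor,
                 bandwidths=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    """Biased (V-statistic) MMD^2 estimate between batches (N, D) and (M, D)."""
    def kernel(a, b):
        # Pairwise squared Euclidean distances under a mixture of Gaussians.
        d2 = torch.cdist(a, b).pow(2)
        return sum(torch.exp(-d2 / (2.0 * s)) for s in bandwidths) / len(bandwidths)
    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2.0 * kernel(source, target).mean())
```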
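Pseudo-label-guided adaptation can then be combined with the MMD term in a single training step. The sketch below (reusing `gaussian_mmd` from above) applies a supervised loss on labeled source slices, the MMD alignment term, and a cross-entropy loss on confidence-thresholded pseudo-labels for unlabeled target slices. The loss weights and the 0.9 confidence threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of one adaptation step: supervised source loss + MMD alignment
# + confidence-thresholded pseudo-label loss on the target batch.
# lambda_mmd, lambda_pl, and tau are assumed hyperparameters.
import torch
import torch.nn.functional as F

def adaptation_step(model, src_x, src_y, tgt_x,
                    lambda_mmd=1.0, lambda_pl=0.5, tau=0.9):
    src_feat = model.features(src_x).flatten(1)
    tgt_feat = model.features(tgt_x).flatten(1)
    src_logits = model.classifier(src_feat)
    tgt_logits = model.classifier(tgt_feat)

    # Supervised loss on labeled multi-modal source slices.
    loss_src = F.cross_entropy(src_logits, src_y)
    # Feature-level alignment between source and target distributions.
    loss_mmd = gaussian_mmd(src_feat, tgt_feat)
    # Pseudo-labels on unlabeled post-contrast T1 slices, kept only where
    # the model is confident, to preserve class discriminability.
    probs = F.softmax(tgt_logits.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf >= tau
    loss_pl = (F.cross_entropy(tgt_logits[mask], pseudo[mask])
               if mask.any() else tgt_logits.new_zeros(()))
    return loss_src + lambda_mmd * loss_mmd + lambda_pl * loss_pl
```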