An Interpretable Vision Transformer Framework for Automated Brain Tumor Classification

arXiv cs.CV / 4/24/2026


Key Points

  • The paper presents a deep learning framework for automated four-class brain tumor classification (glioma, meningioma, pituitary tumor, and healthy tissue) using 7,023 MRI scans.
  • It builds on a Vision Transformer (ViT-B/16) pretrained on ImageNet-21k and improves performance with clinically motivated preprocessing, including CLAHE to enhance tumor boundary visibility.
  • Training uses a two-stage fine-tuning approach (warm-up with the backbone frozen, then full fine-tuning with discriminative learning rates) plus MixUp and CutMix augmentations for better generalization.
  • The method incorporates EMA-weight smoothing and test-time augmentation (TTA) to stabilize predictions, and uses Attention Rollout to produce interpretable region-based heatmaps.
  • The authors report strong results, achieving 99.29% test accuracy and 99.25% macro F1, with perfect recall for healthy and meningioma classes, outperforming CNN baselines.

Abstract

Brain tumors represent one of the most critical neurological conditions, where early and accurate diagnosis is directly correlated with patient survival rates. Manual interpretation of Magnetic Resonance Imaging (MRI) scans is time-intensive, subject to inter-observer variability, and demands significant specialist expertise. This paper proposes a deep learning framework for automated four-class brain tumor classification, distinguishing glioma, meningioma, pituitary tumor, and healthy brain tissue, from a dataset of 7,023 MRI scans. The proposed system employs a Vision Transformer (ViT-B/16) pretrained on ImageNet-21k as the backbone, augmented with a clinically motivated preprocessing and training pipeline. Contrast Limited Adaptive Histogram Equalization (CLAHE) is applied to enhance local contrast and accentuate tumor boundaries that standard normalization leaves faint. A two-stage fine-tuning strategy is adopted: the classification head is warmed up with the backbone frozen, followed by full fine-tuning with discriminative learning rates. MixUp and CutMix augmentations are applied per batch to improve generalization. An Exponential Moving Average (EMA) of weights and Test-Time Augmentation (TTA) further stabilize and boost performance. Attention Rollout visualization provides clinically interpretable heatmaps of the brain regions driving each prediction. The proposed model achieves a test accuracy of 99.29%, a macro F1-score of 99.25%, and perfect recall on both healthy and meningioma classes, outperforming all CNN-based baselines.
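The two-stage fine-tuning schedule described in the abstract can be sketched in PyTorch. The `backbone`/`head` submodule names and all learning rates below are hypothetical stand-ins, not taken from the paper.

```python
import torch
from torch import nn

# Stand-in model: `backbone` plays the role of the pretrained ViT-B/16
# encoder, `head` the fresh 4-class classifier (names are hypothetical).
model = nn.Module()
model.backbone = nn.Linear(768, 768)
model.head = nn.Linear(768, 4)

# Stage 1: freeze the backbone and warm up only the classification head,
# so the randomly initialized head cannot corrupt pretrained features.
for p in model.backbone.parameters():
    p.requires_grad = False
warmup_opt = torch.optim.AdamW(model.head.parameters(), lr=1e-3)

# Stage 2: unfreeze everything; discriminative learning rates give the
# pretrained backbone a much smaller step size than the fresh head.
for p in model.backbone.parameters():
    p.requires_grad = True
opt = torch.optim.AdamW([
    {"params": model.backbone.parameters(), "lr": 1e-5},
    {"params": model.head.parameters(), "lr": 1e-4},
])
```

The warm-up stage protects the ImageNet-21k features early on; the per-group learning rates in stage 2 then let the whole network adapt without catastrophically overwriting them.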
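MixUp, one of the two batch augmentations the paper uses, interpolates both images and one-hot labels with a Beta-sampled coefficient. A minimal NumPy sketch (the `alpha` value is illustrative, not the paper's):

```python
import numpy as np

def mixup(images: np.ndarray, labels: np.ndarray, alpha: float = 0.2,
          rng: np.random.Generator = np.random.default_rng(0)):
    """Blend each sample with a randomly permuted partner from the
    same batch; labels are interpolated with the same coefficient."""
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    perm = rng.permutation(len(images))   # partner index for each sample
    mixed_x = lam * images + (1 - lam) * images[perm]
    mixed_y = lam * labels + (1 - lam) * labels[perm]
    return mixed_x, mixed_y

x = np.random.rand(4, 224, 224)  # toy batch of 4 "scans"
y = np.eye(4)                    # one-hot labels for the 4 classes
mx, my = mixup(x, y)
```

CutMix follows the same label-interpolation idea but pastes a rectangular patch from the partner image instead of blending pixel-wise, with `lam` set by the patch area.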
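The Attention Rollout heatmaps can be sketched as follows: head-averaged attention matrices are multiplied across layers, with the identity added to account for residual connections, and the CLS-token row is read off as a per-patch relevance map. Token counts below assume a ViT-B/16-style layout (196 patches + CLS); this is a generic sketch of the technique, not the paper's exact implementation.

```python
import numpy as np

def attention_rollout(attn_per_layer):
    """attn_per_layer: list of (heads, tokens, tokens) attention arrays,
    one per transformer layer, ordered from input to output."""
    n = attn_per_layer[0].shape[-1]
    rollout = np.eye(n)
    for attn in attn_per_layer:
        a = attn.mean(axis=0)                  # average over heads
        a = a + np.eye(n)                      # model the residual connection
        a = a / a.sum(axis=-1, keepdims=True)  # re-normalize rows
        rollout = a @ rollout                  # compose with earlier layers
    # CLS-token attention to the patch tokens -> flattened heatmap
    return rollout[0, 1:]

# Toy attentions: 3 layers, 12 heads, 197 tokens (1 CLS + 196 patches)
layers = [np.random.rand(12, 197, 197) for _ in range(3)]
heat = attention_rollout(layers)
```

Reshaping the returned vector to the 14×14 patch grid and upsampling it over the input scan yields the region-based heatmaps used for clinical interpretation.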