AI Navigate

Learning Generalizable 3D Medical Image Representations from Mask-Guided Self-Supervision

arXiv cs.CV / March 17, 2026


Key Points

  • MASS introduces mask-guided self-supervised learning for 3D medical images, treating in-context segmentation as the pretext task to learn general-purpose representations without annotated data.
  • It relies on automatically generated class-agnostic masks to provide structural supervision, enabling the model to learn semantic definitions of medical structures through a holistic combination of appearance, shape, spatial context, and anatomical relationships.
  • MASS scales across data regimes, from small single-dataset pretraining to large multi-modal pretraining on 5K CT, MRI, and PET volumes, enabling few-shot segmentation of novel structures and surpassing self-supervised baselines by more than 20 Dice points when labels are scarce.
  • With a frozen encoder, classification of unseen pathologies matches fully supervised training on thousands of samples.
  • Code is available on GitHub, making it possible to pursue 3D medical imaging foundation models without expert annotations.

Abstract

Foundation models have transformed vision and language by learning general-purpose representations from large-scale unlabeled data, yet 3D medical imaging lacks analogous approaches. Existing self-supervised methods rely on low-level reconstruction or contrastive objectives that fail to capture the anatomical semantics critical for medical image analysis, limiting transfer to downstream tasks. We present MASS (MAsk-guided Self-Supervised learning), which treats in-context segmentation as the pretext task for learning general-purpose medical imaging representations. MASS's key insight is that automatically generated class-agnostic masks provide sufficient structural supervision for learning semantically rich representations. By training on thousands of diverse mask proposals spanning anatomical structures and pathological findings, MASS learns what semantically defines medical structures: the holistic combination of appearance, shape, spatial context, and anatomical relationships. We demonstrate effectiveness across data regimes: from small-scale pretraining on individual datasets (20-200 scans) to large-scale multi-modal pretraining on 5K CT, MRI, and PET volumes, all without annotations. MASS demonstrates: (i) few-shot segmentation on novel structures, (ii) matching full supervision with only 20-40% labeled data while outperforming self-supervised baselines by over 20 Dice points in low-data regimes, and (iii) frozen-encoder classification on unseen pathologies that matches fully supervised training with thousands of samples. Mask-guided self-supervised pretraining captures broadly generalizable knowledge, opening a path toward 3D medical imaging foundation models without expert annotations. Code is available: https://github.com/Stanford-AIMI/MASS.
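To make the pretext task concrete, here is a minimal toy sketch of the idea described above: class-agnostic mask proposals are generated automatically, and a predictor must segment the corresponding structure in a query volume given one support (volume, mask) pair. All names (`propose_masks`, `in_context_segment`) and the intensity-based proposal/matching logic are hypothetical illustrations, not the paper's actual architecture or code; MASS trains a neural network, whereas this sketch substitutes a trivial intensity-range matcher.

```python
# Toy sketch of mask-guided self-supervision (hypothetical; NOT the MASS code).
# Class-agnostic mask proposals serve as pretext targets for in-context
# segmentation: given a support volume and one of its proposal masks,
# predict that structure's mask in a query volume.
import numpy as np

def propose_masks(vol, n_bins=4):
    """Stand-in for an automatic class-agnostic proposal method:
    partition the volume into intensity-quantile bins."""
    edges = np.quantile(vol, np.linspace(0, 1, n_bins + 1))
    edges[-1] += 1e-9  # make the top bin inclusive of the max value
    return [(vol >= lo) & (vol < hi) for lo, hi in zip(edges[:-1], edges[1:])]

def in_context_segment(support_vol, support_mask, query_vol):
    """Toy 'in-context' predictor: segment the query by matching the
    intensity range of the supported structure (a learned network in MASS)."""
    vals = support_vol[support_mask]
    return (query_vol >= vals.min()) & (query_vol <= vals.max())

def dice(a, b):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

# Synthetic support/query pair: the query is a slightly perturbed copy,
# so corresponding structures exist in both volumes.
rng = np.random.default_rng(0)
support = rng.normal(size=(16, 16, 16))
query = support + 0.01 * rng.normal(size=(16, 16, 16))

support_masks = propose_masks(support)
query_masks = propose_masks(query)  # pretext targets on the query side
scores = [
    dice(in_context_segment(support, m, query), t)
    for m, t in zip(support_masks, query_masks)
]
print([round(s, 3) for s in scores])
```

Because the toy query is a near-copy of the support volume, the intensity matcher recovers each proposal with high Dice; the point is only to show the supervision signal, where mask proposals on unlabeled volumes define both the prompt and the target, with no human annotations involved.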