PRIME: Prototype-Driven Multimodal Pretraining for Cancer Prognosis with Missing Modalities

arXiv cs.LG / 4/8/2026


Key Points

  • PRIME introduces a missing-aware multimodal self-supervised pretraining framework for cancer prognosis that can learn from patient cohorts where histopathology, gene expression, and pathology report modalities are partially missing.
  • The method aligns heterogeneous modality embeddings into a unified token space and uses a shared prototype memory bank to perform latent-space semantic imputation via patient-level consensus retrieval, avoiding reconstruction of raw signals.
  • PRIME trains with two complementary objectives—inter-modality alignment and post-fusion consistency under structured missingness augmentation—to keep representations predictive across arbitrary modality subsets.
  • Experiments on TCGA using label-free pretraining across 32 cancer types show PRIME achieves the best macro-average performance among compared approaches and improves robustness under test-time missingness for multiple survival and event prediction tasks.
  • The approach is described as supporting parameter-efficient and label-efficient downstream adaptation, suggesting practical deployment in fragmented clinical data settings.
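The prototype-based imputation described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the function name, the cosine-similarity scoring, top-k retrieval, and averaging are all assumptions standing in for whatever retrieval rule PRIME actually uses.

```python
import numpy as np

def impute_missing(observed_tokens, prototypes, top_k=1):
    """Latent-space semantic imputation via patient-level consensus retrieval
    (hypothetical sketch of the idea, not PRIME's actual code).

    observed_tokens: (m, d) embeddings of the patient's observed modalities
    prototypes:      (K, d) shared prototype memory bank
    Returns a (d,) surrogate token for a missing modality.
    """
    # Normalize so dot products are cosine similarities.
    obs = observed_tokens / np.linalg.norm(observed_tokens, axis=1, keepdims=True)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    # Consensus: average each observed modality's similarity profile over the bank.
    sims = obs @ protos.T              # (m, K)
    consensus = sims.mean(axis=0)      # (K,)
    # Retrieve the top-k prototypes under the consensus score and average them.
    idx = np.argsort(consensus)[-top_k:]
    return prototypes[idx].mean(axis=0)
```

The key property this preserves from the paper's description is that imputation happens entirely in the shared token space: no raw slide, expression, or report signal is ever reconstructed.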

Abstract

Multimodal self-supervised pretraining offers a promising route to cancer prognosis by integrating histopathology whole-slide images, gene expression, and pathology reports, yet most existing approaches require fully paired and complete inputs. In practice, clinical cohorts are fragmented and often miss one or more modalities, limiting both supervised fusion and scalable multimodal pretraining. We propose PRIME, a missing-aware multimodal self-supervised pretraining framework that learns robust and transferable representations from partially observed cohorts. PRIME maps heterogeneous modality embeddings into a unified token space and introduces a shared prototype memory bank for latent-space semantic imputation via patient-level consensus retrieval, producing structurally aligned tokens without reconstructing raw signals. Two complementary pretraining objectives, inter-modality alignment and post-fusion consistency under structured missingness augmentation, jointly learn representations that remain predictive under arbitrary modality subsets. We evaluate PRIME on The Cancer Genome Atlas with label-free pretraining on 32 cancer types and downstream 5-fold evaluation on five cohorts across overall survival prediction, 3-year mortality classification, and 3-year recurrence classification. PRIME achieves the best macro-average performance among all compared methods, reaching 0.653 C-index, 0.689 AUROC, and 0.637 AUROC on the three tasks, respectively, while improving robustness under test-time missingness and supporting parameter-efficient and label-efficient adaptation. These results support missing-aware multimodal pretraining as a practical strategy for prognosis modeling in fragmented clinical data settings.
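The two pretraining objectives can be illustrated with a minimal sketch. Everything here is an assumption made for exposition: the mean-pool `fuse` stands in for PRIME's actual fusion module, the cosine form of the alignment term stands in for whatever contrastive objective the paper uses, and dropping one modality at random is only the simplest form of structured missingness augmentation.

```python
import numpy as np

def alignment_loss(tok_a, tok_b):
    """Inter-modality alignment: pull a patient's modality tokens together.
    Sketched as 1 - cosine similarity; a contrastive loss would also fit."""
    a = tok_a / np.linalg.norm(tok_a)
    b = tok_b / np.linalg.norm(tok_b)
    return 1.0 - float(a @ b)

def fuse(tokens):
    """Placeholder fusion: mean-pool the modality tokens (hypothetical)."""
    return np.mean(tokens, axis=0)

def consistency_loss(tokens, rng):
    """Post-fusion consistency under structured missingness augmentation:
    fusing a random strict subset of modalities should match fusing all."""
    full = fuse(tokens)
    # Drop one modality at random to simulate a missing input.
    keep = np.delete(np.arange(len(tokens)), rng.integers(len(tokens)))
    masked = fuse(tokens[keep])
    return float(np.sum((full - masked) ** 2))
```

Jointly minimizing both terms over the pretraining cohort is what, per the abstract, keeps the fused representation predictive whichever modality subset is available at test time.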