A Generative Foundation Model for Multimodal Histopathology

arXiv cs.CV / 4/7/2026


Key Points

  • The study proposes MuPD (Multimodal Pathology Diffusion), a generative foundation model that integrates often-incomplete histopathology (H&E), molecular RNA profiles, and clinical text, generalizing the imputation and generation of missing modalities.
  • MuPD uses a diffusion transformer to embed H&E image patches, text–pathology pairs, and RNA–pathology pairs into a shared latent space, enabling diverse cross-modal generation tasks with little or no task-specific fine-tuning.
  • On generation quality, it reportedly reduces FID by 50% relative to domain-specific models, and synthetic data augmentation improves few-shot classification accuracy by up to 47%.
  • For RNA-conditioned histology generation, it improves FID by 23% over the next-best method; as a "virtual stainer" translating H&E to immunohistochemistry and multiplex immunofluorescence, it is reported to improve average marker correlation by 37%.
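
The FID figures above measure the Fréchet distance between Gaussian fits to real and generated feature distributions. As a minimal illustration (not the paper's evaluation pipeline), the sketch below computes the distance under a simplifying diagonal-covariance assumption, which reduces the matrix square root to an elementwise one:

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))."""
    return float(((mu1 - mu2) ** 2).sum()
                 + (var1 + var2 - 2.0 * np.sqrt(var1 * var2)).sum())

# Toy 2-D "feature" sets standing in for Inception embeddings
real = np.array([[0.0, 0.0], [2.0, 2.0]])
fake = np.array([[1.0, 1.0], [3.0, 3.0]])
mu_r, var_r = real.mean(axis=0), real.var(axis=0)
mu_f, var_f = fake.mean(axis=0), fake.var(axis=0)
print(fid_diagonal(mu_r, var_r, mu_f, var_f))  # → 2.0
```

In practice FID uses full covariance matrices over deep-network features; the diagonal form here only conveys the shape of the metric.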

Abstract

Accurate diagnosis and treatment of complex diseases require integrating histological, molecular, and clinical data, yet in practice these modalities are often incomplete owing to tissue scarcity, assay cost, and workflow constraints. Existing computational approaches attempt to impute missing modalities from available data but rely on task-specific models trained on narrow, single source-target pairs, limiting their generalizability. Here we introduce MuPD (Multimodal Pathology Diffusion), a generative foundation model that embeds hematoxylin and eosin (H&E)-stained histology, molecular RNA profiles, and clinical text into a shared latent space through a diffusion transformer with decoupled cross-modal attention. Pretrained on 100 million histology image patches, 1.6 million text-histology pairs, and 10.8 million RNA-histology pairs spanning 34 human organs, MuPD supports diverse cross-modal synthesis tasks with minimal or no task-specific fine-tuning. For text-conditioned and image-to-image generation, MuPD synthesizes histologically faithful tissue architectures, reducing Fréchet inception distance (FID) scores by 50% relative to domain-specific models and improving few-shot classification accuracy by up to 47% through synthetic data augmentation. For RNA-conditioned histology generation, MuPD reduces FID by 23% compared with the next-best method while preserving cell-type distributions across five cancer types. As a virtual stainer, MuPD translates H&E images to immunohistochemistry and multiplex immunofluorescence, improving average marker correlation by 37% over existing approaches. These results demonstrate that a single, unified generative model pretrained across heterogeneous pathology modalities can substantially outperform specialized alternatives, providing a scalable computational framework for multimodal histopathology.
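The abstract's "decoupled cross-modal attention" suggests that each conditioning modality is attended to through its own branch, so a missing modality can be dropped rather than imputed. The paper does not spell out the mechanism; the sketch below is a hypothetical NumPy illustration of that idea (all names and projections are assumptions, and the random weights stand in for learned parameters):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, k, v):
    """Scaled dot-product attention of queries over one condition stream."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def decoupled_cross_modal_attention(img, text=None, rna=None, seed=0):
    """Hypothetical decoupling: image tokens attend to each available
    condition stream through separate key/value projections, and the
    per-stream outputs are summed, so an absent modality is handled by
    simply omitting its branch."""
    d = img.shape[-1]
    rng = np.random.default_rng(seed)
    proj = lambda: rng.standard_normal((d, d)) / np.sqrt(d)  # stand-in weights
    q = img @ proj()
    out = np.zeros_like(img)
    for cond in (text, rna):
        if cond is not None:
            out += cross_attention(q, cond @ proj(), cond @ proj())
    return out

# Toy tokens: 4 image patches, 3 text tokens, 2 RNA tokens, dim 8
img = np.ones((4, 8))
txt = np.ones((3, 8))
rna = np.ones((2, 8))
full = decoupled_cross_modal_attention(img, text=txt, rna=rna)
text_only = decoupled_cross_modal_attention(img, text=txt)  # RNA missing
print(full.shape, text_only.shape)  # → (4, 8) (4, 8)
```

The design point this illustrates is that per-modality branches make the conditioning set variable-length, which is what allows one pretrained model to cover many source-target pairings.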