AI Navigate

MAD: Microenvironment-Aware Distillation -- A Pretraining Strategy for Virtual Spatial Omics from Microscopy

arXiv cs.CV / 3/17/2026


Key Points

  • MAD is a self-supervised pretraining strategy that learns cell-centric embeddings by jointly self-distilling the morphology view and the microenvironment view into a unified representation.
  • The method is applicable across diverse tissues and imaging modalities and achieves state-of-the-art performance on downstream tasks such as cell subtyping, transcriptomic prediction, and bioinformatic inference.
  • MAD outperforms foundation models of similar parameter count that were trained on substantially larger datasets, highlighting its data efficiency.
  • This dual-view distillation approach establishes MAD as a general tool for representation learning in microscopy, enabling virtual spatial omics and biological insights from large microscopy datasets.

Abstract

Bridging microscopy and omics would allow us to read molecular states from images, at single-cell resolution and tissue scale, without the cost and throughput limits of omics technologies. Self-supervised pretraining offers a scalable approach with minimal labels, yet how to encode single-cell identity within tissue environments, and the extent of biological information such models can capture, remain open questions. Here, we introduce MAD (microenvironment-aware distillation), a pretraining strategy that learns cell-centric embeddings by jointly self-distilling the morphology view and the microenvironment view of the same indexed cell into a unified embedding space. Across diverse tissues and imaging modalities, MAD achieves state-of-the-art prediction performance on downstream tasks including cell subtyping, transcriptomic prediction, and bioinformatic inference. MAD even outperforms foundation models of similar parameter count that were trained on substantially larger datasets. These results demonstrate that MAD's dual-view joint self-distillation effectively captures the complexity and diversity of cells within tissues. Together, this establishes MAD as a general tool for representation learning in microscopy, enabling virtual spatial omics and biological insights from vast microscopy datasets.
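The abstract does not spell out how the two views are combined, so the following is only a minimal NumPy sketch of the general idea of dual-view joint self-distillation: a student network sees one view of a cell (a tight morphology crop or a wider microenvironment crop), and is trained to match an EMA-teacher's output on the other view, pulling both views toward one shared embedding space. All names, crop shapes, temperatures, and the EMA-teacher mechanism are assumptions borrowed from standard self-distillation setups (e.g. DINO-style training), not details of MAD itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, temp):
    """Temperature-scaled softmax over the last axis."""
    z = x / temp
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class Encoder:
    """Stand-in for the image backbone: a single linear projection."""
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_in, d_out)) * 0.01
    def __call__(self, x):
        return x @ self.W

# Hypothetical shapes: each "view" is a flattened image crop.
d_in, d_emb, batch = 64, 16, 8
student, teacher = Encoder(d_in, d_emb), Encoder(d_in, d_emb)
teacher.W = student.W.copy()  # teacher starts as a frozen copy of the student

morph = rng.standard_normal((batch, d_in))  # tight crop around the cell
micro = rng.standard_normal((batch, d_in))  # wider microenvironment crop

# Cross-view distillation: the student on one view is trained to match the
# teacher on the other view, in both directions, so both views of the same
# indexed cell land in a unified embedding space.
p_t_morph = softmax(teacher(morph), temp=0.04)  # sharper teacher targets
p_t_micro = softmax(teacher(micro), temp=0.04)
p_s_morph = softmax(student(morph), temp=0.1)
p_s_micro = softmax(student(micro), temp=0.1)

loss = -0.5 * (np.sum(p_t_morph * np.log(p_s_micro + 1e-9), axis=-1).mean()
             + np.sum(p_t_micro * np.log(p_s_morph + 1e-9), axis=-1).mean())

# The teacher tracks the student via an exponential moving average
# (no gradients flow through the teacher).
momentum = 0.996
teacher.W = momentum * teacher.W + (1 - momentum) * student.W
```

In a real training loop the `loss` would be backpropagated through the student only, and the EMA update applied once per step; here everything is a single forward pass to make the data flow explicit.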