AI Navigate

Spatial Transcriptomics as Images for Large-Scale Pretraining

arXiv cs.CV / March 17, 2026

📰 News · Models & Research

Key Points

  • The paper addresses the ill-posed problem of defining a training sample for large-scale spatial transcriptomics pretraining, noting drawbacks of treating each spot as independent or an entire slide as a single sample.
  • It proposes an image-like representation built by cropping fixed-size patches from raw slides, preserving spatial context while vastly increasing the number of training samples.
  • The approach introduces gene-subset selection rules along the channel dimension to control input dimensionality and improve pretraining stability.
  • Experiments show the image-like ST pretraining method consistently improves downstream performance over conventional schemes, with ablations confirming that both spatial patching and channel design are necessary.

Abstract

Spatial Transcriptomics (ST) profiles thousands of gene expression values at discrete spots with precise coordinates on tissue sections, preserving spatial context essential for clinical and pathological studies. With rising sequencing throughput and advancing platforms, the expanding data volumes motivate large-scale ST pretraining. However, the fundamental unit for pretraining, i.e., what constitutes a single training sample, remains ill-posed. Existing choices fall into two camps: (1) treating each spot as an independent sample, which discards spatial dependencies and collapses ST into single-cell transcriptomics; and (2) treating an entire slide as a single sample, which produces prohibitively large inputs and drastically fewer training examples, undermining effective pretraining. To address this gap, we propose treating spatial transcriptomics as croppable images. Specifically, we define a multi-channel image representation with fixed spatial size by cropping patches from raw slides, thereby preserving spatial context while substantially increasing the number of training samples. Along the channel dimension, we define gene subset selection rules to control input dimensionality and improve pretraining stability. Extensive experiments show that the proposed image-like dataset construction for ST pretraining consistently improves downstream performance, outperforming conventional pretraining schemes. Ablation studies verify that both spatial patching and channel design are necessary, establishing a unified, practical paradigm for organizing ST data and enabling large-scale pretraining.
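The construction described above can be sketched in a few lines of NumPy: rasterize the slide's spots into a multi-channel image, select a gene subset along the channel dimension, and crop fixed-size patches. This is a minimal illustration, not the paper's implementation; the function names, the top-variance selection rule, and the grid/patch sizes are all assumptions for demonstration.

```python
import numpy as np

def rasterize_spots(coords, expr, grid_shape):
    """Place each spot's expression vector onto a regular grid.

    coords: (N, 2) integer row/col indices of N spots on the slide grid.
    expr:   (N, G) expression values over G genes.
    Returns an (H, W, G) array; grid cells with no spot stay zero.
    """
    H, W = grid_shape
    img = np.zeros((H, W, expr.shape[1]), dtype=np.float32)
    img[coords[:, 0], coords[:, 1]] = expr
    return img

def select_gene_channels(img, k):
    """Keep the k highest-variance gene channels.

    Top-variance is one plausible subset-selection rule to control
    input dimensionality; the paper's exact rules may differ.
    """
    var = img.reshape(-1, img.shape[-1]).var(axis=0)
    keep = np.argsort(var)[-k:]
    return img[..., keep]

def crop_patches(img, patch, stride):
    """Crop fixed-size (patch x patch) samples from one slide,
    turning a single large slide into many training examples."""
    H, W, _ = img.shape
    return [img[r:r + patch, c:c + patch]
            for r in range(0, H - patch + 1, stride)
            for c in range(0, W - patch + 1, stride)]
```

For example, a 64×64 slide grid cropped with `patch=16, stride=16` yields 16 samples per slide rather than one whole-slide input, while each patch retains the local spatial neighborhood that per-spot sampling would discard.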