Spatial Transcriptomics as Images for Large-Scale Pretraining
arXiv cs.CV / 3/17/2026
News · Models & Research
Key Points
- The paper addresses the ill-posed problem of defining a training sample for large-scale spatial transcriptomics pretraining, noting drawbacks of treating each spot as independent or an entire slide as a single sample.
- It proposes an image-like representation: fixed-size patches cropped from raw slides preserve spatial context while vastly increasing the number of training samples.
- The approach introduces gene-subset selection rules along the channel dimension to control input dimensionality and improve pretraining stability.
- Experiments show the image-like ST pretraining method consistently improves downstream performance over conventional schemes, with ablations confirming that both spatial patching and channel design are necessary.
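The patching and channel-selection steps above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the slide layout `(H, W, G)`, the function name, and the variance-based gene-subset rule are all assumptions for the sake of the example.

```python
import numpy as np

def make_patches(slide, patch=64, stride=64, top_genes=128):
    """Crop fixed-size spatial patches from an image-like ST slide and keep
    a gene subset along the channel axis.

    slide: (H, W, G) array of spot-level gene expression laid out on a grid.
    Illustrative sketch; the gene-selection rule here (highest variance)
    is an assumption, not necessarily the paper's rule.
    """
    H, W, G = slide.shape
    # Assumed channel rule: keep the genes with the highest expression variance.
    var = slide.reshape(-1, G).var(axis=0)
    keep = np.argsort(var)[::-1][:top_genes]
    patches = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(slide[y:y + patch, x:x + patch][..., keep])
    # (num_patches, patch, patch, top_genes): many samples per slide,
    # each retaining local spatial context.
    return np.stack(patches)
```

Compared with treating each spot as independent (no spatial context) or each slide as one sample (too few samples), each patch here is an intermediate-scale training unit, and the channel subset bounds input dimensionality regardless of the slide's full gene panel.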