Leveraging Spatial Transcriptomics as Alternative to Manual Annotations for Deep Learning-Based Nuclei Analysis

arXiv cs.CV / 4/28/2026


Key Points

  • The paper addresses the high cost and difficulty of pixel-level manual annotations for deep learning-based nuclei segmentation and classification in pathology images.
  • It proposes using spatial transcriptomics (ST) as supervision by pairing cell-level gene expression profiles obtained from ST with the corresponding nuclear masks extracted from histopathology images.
  • Gene expression profiles are converted into cell-type labels, then used to train an image-based nuclei classification model that bridges gene-expression cell typing with image recognition.
  • The authors evaluate transferability by testing segmentation on previously unseen organs, reporting higher accuracy than conventional fully supervised baselines despite training on fewer organ types.
  • Classification experiments also show consistent performance gains over existing methods, suggesting the approach improves both segmentation and classification robustness across tissue/staining diversity.
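The central conversion step above, turning each cell's gene expression profile into a discrete cell-type label that can supervise an image model, can be sketched with a simple marker-gene scoring rule. This is an illustrative assumption, not the paper's actual method: the marker sets (`MARKERS`) and the argmax-over-mean-expression rule are hypothetical stand-ins for whatever expression-based cell typing the authors use.

```python
import numpy as np

# Hypothetical marker-gene sets per cell type (illustrative only; the paper's
# actual gene-expression-to-cell-type mapping is not specified in this summary).
MARKERS = {
    "epithelial": ["EPCAM", "KRT8", "KRT18"],
    "immune":     ["PTPRC", "CD3E", "CD68"],
    "stromal":    ["COL1A1", "PDGFRB", "ACTA2"],
}

def cell_type_labels(expr, gene_names):
    """Assign each cell the type whose marker genes have the highest mean expression.

    expr: (n_cells, n_genes) array of normalized ST counts.
    gene_names: gene symbols, one per column of expr.
    """
    idx = {g: i for i, g in enumerate(gene_names)}
    types = list(MARKERS)
    # Mean marker-gene expression per cell type -> (n_cells, n_types) scores.
    scores = np.stack(
        [expr[:, [idx[g] for g in genes if g in idx]].mean(axis=1)
         for genes in MARKERS.values()],
        axis=1,
    )
    return [types[k] for k in scores.argmax(axis=1)]
```

The resulting per-cell labels, paired with the nuclear masks from the same ST-registered tissue section, would then form the (image patch, cell type) training pairs for the image-based classifier, with no manual pixel-level annotation involved.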

Abstract

Deep learning-based nuclei segmentation and classification in pathology images typically rely on large-scale pixel-level manual annotations, which are costly and difficult to obtain across diverse tissues and staining conditions. To address this limitation, we propose a framework that leverages spatial transcriptomics (ST) data as supervision for nuclei segmentation and classification. By incorporating cell-level ST data, we obtain gene expression profiles and corresponding nuclear masks from histopathological images. Gene expression profiles are converted into cell-type labels and used as training data for image-based classification. Because existing gene expression-based cell-type classification methods are not designed for image recognition, we introduce an image-oriented classification approach that bridges gene expression-based cell typing and image-based cell classification. To evaluate generalization, we conduct segmentation experiments on previously unseen organs and compare our method with conventional supervised models. Despite being trained on fewer organ types, our framework achieves higher segmentation accuracy, demonstrating strong transferability. Classification experiments further show consistent improvements over existing approaches.