SD-FSMIS: Adapting Stable Diffusion for Few-Shot Medical Image Segmentation

arXiv cs.CV / 4/6/2026


Key Points

  • The paper introduces SD-FSMIS, a framework that adapts a pre-trained Stable Diffusion model to few-shot medical image segmentation, targeting data scarcity and domain shift issues common in medical imaging.
  • It repurposes Stable Diffusion’s conditional generative structure by adding two components: a Support-Query Interaction (SQI) module and a Visual-to-Textual Condition Translator (VTCT) that converts support-set visual cues into an implicit textual embedding for conditioning (a rough sketch of this idea follows the list).
  • Experimental results show SD-FSMIS achieves competitive performance against existing state-of-the-art few-shot segmentation methods in standard evaluation settings.
  • The method also demonstrates strong cross-domain generalization, suggesting diffusion-model priors can transfer well even when the target domain differs from training.
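
The VTCT idea can be pictured as a small adapter that pools masked support-image features into a fixed number of pseudo "text" tokens shaped like Stable Diffusion's CLIP conditioning. The sketch below is a minimal, assumed PyTorch implementation: the paper does not publish this layer layout, token count, or feature dimensions, so every module name and hyperparameter here is illustrative rather than the authors' design.

```python
import torch
import torch.nn as nn

class VTCTSketch(nn.Module):
    """Hypothetical Visual-to-Textual Condition Translator (VTCT) sketch.

    Pools masked support-image features into a sequence of pseudo "text"
    tokens with the shape Stable Diffusion 1.x cross-attention expects
    (77 tokens x 768 dims). All layer choices are illustrative only.
    """

    def __init__(self, feat_dim: int = 1024, token_dim: int = 768, num_tokens: int = 77):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_tokens, token_dim))  # learnable pseudo-tokens
        self.proj = nn.Linear(feat_dim, token_dim)                       # visual features -> token space
        self.attn = nn.MultiheadAttention(token_dim, num_heads=8, batch_first=True)
        self.norm = nn.LayerNorm(token_dim)

    def forward(self, support_feats: torch.Tensor, support_mask: torch.Tensor) -> torch.Tensor:
        # support_feats: (B, N, feat_dim) patch features from a frozen visual encoder
        # support_mask:  (B, N) foreground weights for the annotated class
        kv = self.proj(support_feats) * support_mask.unsqueeze(-1)       # emphasize foreground cues
        q = self.queries.unsqueeze(0).expand(kv.size(0), -1, -1)
        tokens, _ = self.attn(q, kv, kv)                                  # pool visual cues into tokens
        return self.norm(tokens)                                          # (B, 77, 768) conditioning
```

The pooled tokens would then be fed to the SD U-Net wherever the text embedding normally goes (e.g. as the encoder hidden states in diffusers-style U-Nets), which is the sense in which visual cues are "translated" into a textual condition.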

Abstract

Few-Shot Medical Image Segmentation (FSMIS) aims to segment novel object classes in medical images using only minimal annotated examples, addressing the critical challenges of data scarcity and domain shifts prevalent in medical imaging. While Diffusion Models (DMs) excel in visual tasks, their potential for FSMIS remains largely unexplored. We propose that the rich visual priors learned by large-scale DMs offer a powerful foundation for a more robust and data-efficient segmentation approach. In this paper, we introduce SD-FSMIS, a novel framework designed to effectively adapt the powerful pre-trained Stable Diffusion (SD) model for the FSMIS task. Our approach repurposes its conditional generative architecture by introducing two key components: a Support-Query Interaction (SQI) module and a Visual-to-Textual Condition Translator (VTCT). Specifically, SQI provides a straightforward yet powerful means of adapting SD to the FSMIS paradigm. The VTCT module translates visual cues from the support set into an implicit textual embedding that guides the diffusion model, enabling precise conditioning of the generation process. Extensive experiments demonstrate that SD-FSMIS achieves competitive results compared to state-of-the-art methods in standard settings. Surprisingly, it also demonstrates excellent generalization ability in more challenging cross-domain scenarios. These findings highlight the immense potential of adapting large-scale generative models to advance data-efficient and robust medical image segmentation.
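
The abstract does not detail how the Support-Query Interaction works, only that it adapts SD to the few-shot paradigm in a straightforward way. A common pattern in few-shot segmentation, shown below purely as an assumed stand-in and not the paper's actual SQI, is to pool a class prototype from the masked support features and use its similarity to the query features as a coarse foreground prior that a segmentation head can fuse with the diffusion features.

```python
import torch
import torch.nn.functional as F

def support_query_interaction(query_feats, support_feats, support_mask, temperature=20.0):
    """Illustrative support-query interaction (not the paper's SQI design).

    query_feats:   (B, C, Hq, Wq) query-image features
    support_feats: (B, C, Hs, Ws) support-image features
    support_mask:  (B, 1, Hs, Ws) binary foreground mask for the novel class
    """
    mask = F.interpolate(support_mask, size=support_feats.shape[-2:], mode="nearest")
    # Masked average pooling -> one class prototype per support image.
    prototype = (support_feats * mask).sum(dim=(2, 3)) / (mask.sum(dim=(2, 3)) + 1e-6)   # (B, C)
    # Cosine similarity between the prototype and every query location.
    sim = F.cosine_similarity(query_feats, prototype[:, :, None, None], dim=1)            # (B, Hq, Wq)
    prior = torch.sigmoid(temperature * sim).unsqueeze(1)                                  # (B, 1, Hq, Wq)
    return prior  # fuse with the query features, e.g. query_feats * prior
```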