Weakly-Supervised Lung Nodule Segmentation via Training-Free Guidance of 3D Rectified Flow

arXiv cs.CV / 4/10/2026


Key Points

  • The paper addresses the high cost of voxel-wise 3D lung nodule segmentation by proposing a weakly-supervised method that relies only on image-level labels rather than dense annotations.
  • It introduces a plug-and-play framework that uses a pretrained 3D rectified flow generative model together with a predictor model, applying training-free guidance to improve segmentation quality.
  • The generative model is not retrained; only the predictor is fine-tuned, reducing compute and data requirements compared with fully supervised training or retraining the generative model.
  • Experiments on LUNA16 show consistent improvements over weakly supervised baselines, including more reliable detection of lung nodules of varying sizes and shapes.
  • The authors argue that generative foundation-model-style components can serve as effective guidance tools for weakly supervised 3D medical image segmentation.
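The guidance mechanism described above can be sketched in miniature. The snippet below is a hypothetical 1D stand-in, not the paper's 3D implementation: a closed-form rectified-flow velocity field plays the role of the pretrained generative model, a logistic classifier plays the fine-tuned predictor, and the predictor's log-probability gradient is simply added to the velocity during Euler sampling. All names (`pretrained_velocity`, `predictor_grad`, `guided_sample`) and the guidance scale are illustrative assumptions.

```python
import numpy as np

def pretrained_velocity(x, t):
    """Closed-form rectified-flow velocity for a toy 1D model: noise prior
    N(0, 1), data concentrated at two modes {-1, +1}. This is a hypothetical
    stand-in for the paper's pretrained 3D rectified flow."""
    modes = np.array([-1.0, 1.0])
    # p(x_t | x1 = mode) = N(t * mode, (1 - t)^2); posterior weights over modes
    log_w = -0.5 * ((x[:, None] - t * modes) / (1.0 - t)) ** 2
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    e_x1 = w @ modes                    # E[x1 | x_t]
    return (e_x1 - x) / (1.0 - t)       # marginal velocity field

def predictor_grad(x, k=5.0):
    """Gradient of log p(y = +1 | x) for a logistic predictor sigmoid(k * x),
    playing the role of the predictor fine-tuned on image-level labels."""
    return k / (1.0 + np.exp(k * x))    # = k * sigmoid(-k * x)

def guided_sample(n=256, steps=99, scale=1.5, seed=0):
    """Euler integration of the rectified-flow ODE with training-free guidance:
    the predictor's gradient is added to the velocity; no model is retrained."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)          # start from the noise prior at t = 0
    dt = 1.0 / (steps + 1)              # integrate t over [0, 1)
    for i in range(steps):
        t = i * dt
        v = pretrained_velocity(x, t) + scale * predictor_grad(x)
        x = x + dt * v
    return x

samples = guided_sample()
print(f"guided sample mean: {samples.mean():.2f}")  # near +1 when guidance dominates
```

With `scale=0.0` the same sampler reproduces the unguided flow, whose samples split roughly evenly between the two modes; the guidance term steers nearly all of them to the mode the predictor favors, without touching the "pretrained" velocity field.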

Abstract

Dense annotations such as segmentation masks are expensive and time-consuming to obtain, especially for 3D medical images, where expert voxel-wise labeling is required. Weakly supervised approaches aim to address this limitation but often rely on attribution-based methods that struggle to accurately capture small structures such as lung nodules. In this paper, we propose a weakly supervised segmentation method for lung nodules that combines pretrained state-of-the-art rectified flow and predictor models in a plug-and-play manner. Our approach uses training-free guidance of a 3D rectified flow model, requiring only fine-tuning of the predictor with image-level labels and no retraining of the generative model. The proposed method produces higher-quality segmentations for two separate predictors, consistently detecting lung nodules of varying sizes and shapes. Experiments on LUNA16 demonstrate improvements over baseline methods, highlighting the potential of generative foundation models as tools for weakly supervised 3D medical image segmentation.
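The abstract does not spell out how guided generation is turned into a voxel-wise mask. One plausible recipe, shown below purely as an assumption rather than the paper's documented pipeline, is to compare an input volume against a guidance-generated "healthy" counterfactual and threshold the voxel-wise difference. The function name `mask_from_counterfactual` and the synthetic volumes are illustrative.

```python
import numpy as np

def mask_from_counterfactual(volume, counterfactual, rel_thresh=0.5):
    """Threshold the voxel-wise difference between an input volume and a
    'healthy' counterfactual. This is one plausible way to derive a weak
    segmentation (an assumption for illustration, not the paper's recipe)."""
    diff = np.abs(volume - counterfactual)
    return diff > rel_thresh * diff.max()

rng = np.random.default_rng(0)
shape = (32, 32, 32)

# Toy 'scan': background noise plus a bright spherical 'nodule' of radius 3.
background = rng.normal(0.0, 0.05, size=shape)
zz, yy, xx = np.ogrid[:32, :32, :32]
nodule = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 <= 9
volume = background.copy()
volume[nodule] += 1.0

# Stand-in for the guided 'healthy' reconstruction: background plus a little
# reconstruction error. In the real method this would come from the flow model.
counterfactual = background + rng.normal(0.0, 0.02, size=shape)

mask = mask_from_counterfactual(volume, counterfactual)
```

On this toy volume the thresholded difference recovers the synthetic nodule almost exactly; in practice the quality of such a mask would hinge on how faithfully the guided flow reconstructs healthy anatomy outside the lesion.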