MagicSeg: Open-World Segmentation Pretraining via Counterfactual Diffusion-Based Auto-Generation

arXiv cs.CV / 3/23/2026

📰 News · Models & Research

Key Points

  • The paper introduces MagicSeg, a diffusion-model-driven pipeline that generates open-world segmentation datasets by converting class labels into textual descriptions to guide image generation.
  • It creates both positive and counterfactual negative images to enable contrastive training and improve data diversity for segmentation.
  • The pipeline uses an open-vocabulary detector and an interactive segmentation model to extract pixel-level masks from synthetic images, providing pseudo-label supervision for pretraining.
  • MagicSeg achieves state-of-the-art results on PASCAL VOC (62.9%), PASCAL Context (26.7%), and COCO (40.2%), illustrating its effectiveness for open-world semantic segmentation.
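The generation pipeline described in the points above can be sketched as a small orchestration function. Everything here is hypothetical: the caption templates, function names, and stub components stand in for the paper's actual language model, diffusion model, open-vocabulary detector, and interactive segmenter.

```python
# Hypothetical sketch of the MagicSeg data-generation flow; the component
# callables (diffusion, detector, segmenter) are placeholders, not the
# authors' implementation.

def label_to_caption(label):
    # Stage 1: expand a class label into a textual description
    # (the paper generates descriptions; a fixed template stands in here).
    return f"a photo of a {label} in a natural scene"

def counterfactual_caption(label):
    # Negative prompt: a comparable scene explicitly without the target class.
    return f"a natural scene with no {label} present"

def generate_sample(label, diffusion, detector, segmenter):
    pos_img = diffusion(label_to_caption(label))         # positive image
    neg_img = diffusion(counterfactual_caption(label))   # counterfactual pair
    boxes = detector(pos_img, label)     # open-vocabulary detection
    mask = segmenter(pos_img, boxes)     # box-prompted segmentation -> pseudo mask
    return {"label": label, "positive": pos_img,
            "negative": neg_img, "mask": mask}
```

With real models plugged in, `diffusion` would be a text-to-image pipeline and `detector`/`segmenter` would produce boxes and pixel masks; the dictionary per label is what feeds the pretraining stage.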

Abstract

Open-world semantic segmentation currently relies heavily on large image-text pair datasets, which often lack fine-grained pixel annotations across a sufficient range of categories. Acquiring such data is prohibitively expensive, demanding substantial human labor and time. Motivated by the strong image-generation capabilities of diffusion models, we introduce a novel diffusion-model-driven pipeline, named "MagicSeg", for automatically generating datasets tailored to open-world semantic segmentation. MagicSeg starts from class labels and generates high-fidelity textual descriptions, which in turn guide the diffusion model's image generation. Rather than generating only positive samples for each label, the pipeline simultaneously generates corresponding negative images that serve as paired counterfactual samples for contrastive training. To provide a self-supervised signal for open-world segmentation pretraining, MagicSeg then combines an open-vocabulary detection model with an interactive segmentation model to extract object masks from the generated images, conditioned on the provided category labels, yielding precise pseudo segmentation labels. Applying this dataset to a contrastive language-image pretraining model, with pseudo-mask supervision and auxiliary counterfactual contrastive training, gives the downstream model strong performance on open-world semantic segmentation. We evaluate our model on PASCAL VOC, PASCAL Context, and COCO, achieving state-of-the-art performance of 62.9%, 26.7%, and 40.2%, respectively, demonstrating our dataset's effectiveness in enhancing open-world semantic segmentation capabilities. Project website: https://github.com/ckxhp/magicseg.
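The counterfactual contrastive training mentioned in the abstract can be illustrated with a standard InfoNCE-style objective, where the counterfactual images supply the negatives. This is a minimal sketch of that general loss form, not the paper's exact formulation; the function and argument names are my own, and embeddings are plain Python lists for clarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def counterfactual_contrastive_loss(text_emb, pos_emb, neg_embs, tau=0.07):
    """InfoNCE-style loss: pull the text embedding toward its positive image
    embedding, push it away from the paired counterfactual negatives."""
    pos = math.exp(cosine(text_emb, pos_emb) / tau)
    negs = sum(math.exp(cosine(text_emb, n) / tau) for n in neg_embs)
    return -math.log(pos / (pos + negs))
```

The loss approaches zero when the text aligns with its positive image and is dissimilar to every counterfactual, and grows when a counterfactual image is as similar to the text as the true positive, which is exactly the pressure the paired negatives are meant to exert.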