Segment Any-Quality Images with Generative Latent Space Enhancement

arXiv cs.CV / 4/27/2026


Key Points

  • Segment Anything models (SAMs) drop significantly in accuracy on severely degraded, low-quality images, reducing their real-world usability.
  • The paper introduces GleSAM, which enhances robustness by performing generative diffusion in the latent space of a SAM-based segmentation framework to reconstruct higher-quality representations.
  • It adapts latent diffusion concepts to segmentation and adds two techniques to better integrate a pre-trained diffusion model with the SAM/SAM2 segmentation pipeline.
  • GleSAM is designed to work with pre-trained SAM and SAM2 using only minimal additional learnable parameters, enabling efficient training.
  • The authors also release LQSeg, a dataset with diverse degradation types and levels, and show that GleSAM improves performance on complex and unseen degradations while retaining strong results on clear images.
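The core idea — running a generative denoising process over the segmentation encoder's latent features rather than over pixels — can be sketched in miniature. The snippet below is a hypothetical illustration, not the paper's implementation: `toy_denoiser` stands in for a learned noise predictor, and the feature-map shape is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_denoiser(z, t):
    # Stand-in for a learned noise predictor eps_theta(z, t).
    # Here it simply treats a fixed fraction of z as residual noise.
    return 0.1 * z

def enhance_latent(z_degraded, steps=10):
    """Iteratively refine a degraded latent toward a cleaner one,
    mimicking a diffusion-style reverse process in feature space."""
    z = z_degraded
    for t in range(steps, 0, -1):
        eps = toy_denoiser(z, t)
        z = z - eps  # subtract the predicted noise component
    return z

# Degraded encoder features for a low-quality image (shape is illustrative).
z_lq = rng.standard_normal((64, 64, 256))
z_hq = enhance_latent(z_lq)
print(z_hq.shape)  # enhanced latent keeps the encoder's feature shape
```

In GleSAM the analogous denoising is done by a pre-trained latent diffusion model adapted to SAM's feature space, so the mask decoder receives reconstructed high-quality representations instead of degraded ones.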

Abstract

Despite their success, Segment Anything Models (SAMs) experience significant performance drops on severely degraded, low-quality images, limiting their effectiveness in real-world scenarios. To address this, we propose GleSAM, which utilizes Generative Latent space Enhancement to boost robustness on low-quality images, thus enabling generalization across various image qualities. Specifically, we adapt the concept of latent diffusion to SAM-based segmentation frameworks and perform the generative diffusion process in the latent space of SAM to reconstruct high-quality representation, thereby improving segmentation. Additionally, we introduce two techniques to improve compatibility between the pre-trained diffusion model and the segmentation framework. Our method can be applied to pre-trained SAM and SAM2 with only minimal additional learnable parameters, allowing for efficient optimization. We also construct the LQSeg dataset with a greater diversity of degradation types and levels for training and evaluating the model. Extensive experiments demonstrate that GleSAM significantly improves segmentation robustness on complex degradations while maintaining generalization to clear images. Furthermore, GleSAM also performs well on unseen degradations, underscoring the versatility of our approach and dataset.