AI Navigate

Empowering Semantic-Sensitive Underwater Image Enhancement with VLM

arXiv cs.AI / 3/16/2026


Key Points

  • The work addresses the distribution shift between high-quality enhanced underwater images and natural images, which hinders semantic cue extraction for downstream vision tasks and limits existing underwater image enhancement (UIE) models.
  • It proposes a learning mechanism that uses Vision-Language Models to generate textual descriptions of key objects from a degraded image and a text-image alignment model to map these descriptions back onto the image, creating a spatial semantic guidance map.
  • This semantic guidance map steers the UIE network through a dual-guidance mechanism that combines cross-attention and an explicit alignment loss, focusing restoration on semantically important regions.
  • Experiments show that applying the strategy to different UIE baselines significantly boosts perceptual quality metrics and improves performance on detection and segmentation tasks, demonstrating adaptability across models.
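The paper does not include code, but the second step above (remapping VLM-generated object descriptions onto the image) can be illustrated with a minimal CLIP-style sketch: cosine similarity between a text embedding and patch embeddings, turned into a spatial map. All function and variable names here are hypothetical, not from the paper.

```python
import numpy as np

def semantic_guidance_map(patch_embeds, text_embed, grid_hw, tau=0.07):
    """Sketch: project a text embedding onto image patch embeddings to
    obtain a spatial semantic guidance map.

    patch_embeds: (N, D) patch embeddings from a CLIP-style image encoder.
    text_embed:   (D,) embedding of the VLM-generated object description.
    grid_hw:      (H, W) patch grid with H * W == N.
    Returns an (H, W) map in [0, 1] highlighting semantically relevant regions.
    """
    # Cosine similarity between every patch and the description.
    p = patch_embeds / np.linalg.norm(patch_embeds, axis=1, keepdims=True)
    t = text_embed / np.linalg.norm(text_embed)
    sim = p @ t                                   # (N,)
    # Temperature-scaled softmax gives a soft spatial attention mask.
    logits = sim / tau
    w = np.exp(logits - logits.max())
    w /= w.sum()
    # Min-max normalize to [0, 1] for use as a guidance map.
    w = (w - w.min()) / (w.max() - w.min() + 1e-8)
    return w.reshape(grid_hw)
```

In practice the patch embeddings would come from the alignment model's image tower and the text embedding from its text tower; the temperature `tau` is a common CLIP-style choice, not a value reported by the paper.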

Abstract

In recent years, learning-based underwater image enhancement (UIE) techniques have rapidly evolved. However, distribution shifts between high-quality enhanced outputs and natural images can hinder semantic cue extraction for downstream vision tasks, thereby limiting the adaptability of existing enhancement models. To address this challenge, this work proposes a new learning mechanism that leverages Vision-Language Models (VLMs) to empower UIE models with semantic-sensitive capabilities. Concretely, our strategy first generates textual descriptions of key objects from a degraded image via VLMs. Subsequently, a text-image alignment model remaps these relevant descriptions back onto the image to produce a spatial semantic guidance map. This map then steers the UIE network through a dual-guidance mechanism, which combines cross-attention and an explicit alignment loss. This forces the network to focus its restorative power on semantic-sensitive regions during image reconstruction, rather than pursuing a globally uniform improvement, thereby ensuring the faithful restoration of key object features. Experiments confirm that when our strategy is applied to different UIE baselines, it significantly boosts their performance on perceptual quality metrics and enhances their performance on detection and segmentation tasks, validating its effectiveness and adaptability.
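The dual-guidance mechanism is described only at a high level. A generic numpy sketch of its two ingredients, under the assumption that image features attend to text-derived semantic features via standard scaled dot-product cross-attention and that the alignment loss is a guidance-weighted feature distance (the paper's exact formulation is not given, so all signatures below are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_feats, sem_feats):
    """Scaled dot-product cross-attention: image features (queries, (N, D))
    attend to semantic text-derived features (keys/values, (M, D))."""
    d = img_feats.shape[-1]
    attn = softmax(img_feats @ sem_feats.T / np.sqrt(d), axis=-1)  # (N, M)
    return attn @ sem_feats                                        # (N, D)

def guided_alignment_loss(enhanced, reference, guidance):
    """Guidance-weighted L2 alignment loss: reconstruction errors in
    semantically important regions (high guidance weight) cost more.
    enhanced, reference: (N, D) features; guidance: (N,) weights in [0, 1]."""
    per_patch = ((enhanced - reference) ** 2).mean(axis=-1)  # (N,)
    return float((guidance * per_patch).sum() / (guidance.sum() + 1e-8))
```

Weighting the loss by the guidance map is one way to realize "focusing restorative power on semantic-sensitive regions rather than a globally uniform improvement"; the actual network would learn projection matrices for queries, keys, and values rather than using raw features as here.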