Style-Decoupled Adaptive Routing Network for Underwater Image Enhancement

arXiv cs.CV / 4/15/2026


Key Points

  • The paper introduces SDAR-Net, an underwater image enhancement framework designed to avoid the limitations of uniform enhancement mappings that fail across mildly versus severely degraded images.
  • SDAR-Net decouples degradation “styles” from the input while preserving static scene structure, using dynamic style embeddings and a separate structural representation learned through a tailored training setup.
  • It adds an adaptive routing mechanism that computes soft weights across different enhancement states based on style features, enabling weighted fusion that better matches each image’s restoration needs.
  • Experiments report new SOTA performance of 25.72 dB PSNR on a real-world benchmark and indicate improved utility for downstream vision tasks; code is released on GitHub.

Abstract

Underwater Image Enhancement (UIE) is essential for robust visual perception in marine applications. However, existing methods predominantly rely on a uniform mapping tailored to the average dataset distribution, leading to over-processing of mildly degraded images or insufficient recovery of severely degraded ones. To address this challenge, we propose a novel adaptive enhancement framework, SDAR-Net. Unlike existing uniform paradigms, it first decouples the specific degradation style from the input and then modulates the enhancement process adaptively. Specifically, since underwater degradation primarily shifts appearance while leaving the scene structure intact, SDAR-Net formulates image features as dynamic degradation-style embeddings and static scene-structure representations through a carefully designed training framework. We then introduce an adaptive routing mechanism: by evaluating the style features, it adaptively predicts soft weights over different enhancement states and guides the weighted fusion of the corresponding image representations, accurately satisfying the restoration demands of each image. Extensive experiments show that SDAR-Net achieves new state-of-the-art (SOTA) performance with a PSNR of 25.72 dB on a real-world benchmark, and demonstrates its utility in downstream vision tasks. Our code is available at https://github.com/WHU-USI3DV/SDAR-Net.
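To make the routing idea concrete, here is a minimal NumPy sketch of the mechanism the abstract describes: a style embedding is mapped to softmax weights over K enhancement states, and the corresponding per-state representations are fused by a weighted sum. All names (`route_weights`, `fuse_states`, the linear router, K=4) are illustrative assumptions, not taken from the paper's released code.

```python
# Hypothetical sketch of style-conditioned adaptive routing:
# predict soft weights over K enhancement states from a style
# embedding, then fuse the per-state representations. The router
# here is a plain linear map; the paper's actual module may differ.
import numpy as np

def route_weights(style_embedding: np.ndarray, router: np.ndarray) -> np.ndarray:
    """Predict soft weights over K enhancement states from a style embedding."""
    logits = router @ style_embedding          # (K,) routing scores
    logits -= logits.max()                     # shift for numerical stability
    w = np.exp(logits)
    return w / w.sum()                         # softmax: weights sum to 1

def fuse_states(states: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weighted fusion of per-state image representations, shape (K, C, H, W)."""
    return np.tensordot(weights, states, axes=1)   # -> (C, H, W)

rng = np.random.default_rng(0)
style = rng.normal(size=8)                     # style embedding, D = 8 (assumed)
router = rng.normal(size=(4, 8))               # linear router for K = 4 states
states = rng.normal(size=(4, 16, 32, 32))      # K candidate representations

w = route_weights(style, router)
fused = fuse_states(states, w)
print(w, fused.shape)
```

Because the weights are soft rather than a hard argmax, a mildly degraded image can draw mostly on a light-touch enhancement state while a severely degraded one leans on stronger restoration, which is the adaptivity the uniform-mapping baselines lack.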