Multi-Branch Non-Homogeneous Image Dehazing via Concentration Partitioning and Image Fusion

arXiv cs.CV / 5/5/2026


Key Points

  • The paper addresses a key weakness of existing single-image dehazing methods: they often fail on non-homogeneous hazy images with spatially varying haze density and abrupt transitions.
  • It proposes CPIFNet, a multi-branch deep neural network that decomposes a non-homogeneous dehazing task into multiple approximately homogeneous sub-tasks by treating the hazy image as a composite of local regions.
  • CPIFNet uses a two-stage design: an Image Enhancement Network (IENet) stage with multiple branches trained on homogeneous haze at different concentration levels, followed by an Image Fusion Network (IFNet) stage that merges the best restored regions.
  • The approach is trained with a combined loss function that includes reconstruction, perceptual, structural, and color losses to jointly supervise both stages for improved visual quality.
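To make the combined supervision concrete, here is a minimal NumPy sketch of such a multi-term loss. The exact formulations and weights in the paper are not given here, so the choices below are assumptions: L1 for reconstruction, a gradient-difference term as a stand-in for the structural loss, and a per-pixel cosine distance as the color loss. The perceptual term (typically a VGG-feature distance) is omitted to keep the sketch dependency-free.

```python
import numpy as np

def l1_loss(pred, gt):
    # Reconstruction term: mean absolute error over all pixels/channels.
    return np.mean(np.abs(pred - gt))

def gradient_loss(pred, gt):
    # Simplified structural proxy (assumption, not the paper's exact term):
    # match horizontal and vertical intensity gradients.
    dx = np.mean(np.abs(np.diff(pred, axis=1) - np.diff(gt, axis=1)))
    dy = np.mean(np.abs(np.diff(pred, axis=0) - np.diff(gt, axis=0)))
    return dx + dy

def color_loss(pred, gt, eps=1e-8):
    # Cosine distance between per-pixel RGB vectors, a common "color angle" loss.
    dot = np.sum(pred * gt, axis=-1)
    norm = np.linalg.norm(pred, axis=-1) * np.linalg.norm(gt, axis=-1) + eps
    return np.mean(1.0 - dot / norm)

def total_loss(pred, gt, w_rec=1.0, w_struct=0.5, w_col=0.25):
    # Weights are illustrative placeholders, not values from the paper.
    return (w_rec * l1_loss(pred, gt)
            + w_struct * gradient_loss(pred, gt)
            + w_col * color_loss(pred, gt))
```

In a real training loop each term would be differentiable (e.g. in PyTorch) and applied to the outputs of both the enhancement and fusion stages, as the paper's joint supervision describes.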

Abstract

Existing single image dehazing methods have demonstrated satisfactory performance on homogeneous thin-haze images; however, they often struggle with non-homogeneous hazy images that exhibit spatially varying haze concentrations and abrupt density transitions across different regions. To address this fundamental limitation, we propose a novel multi-branch deep neural network framework, termed Concentration Partitioning and Image Fusion Network (CPIFNet), which decomposes the challenging non-homogeneous dehazing problem into a set of tractable homogeneous sub-problems. Our key insight is that a single non-homogeneous hazy image can be viewed as a composite of multiple local regions, each exhibiting approximately homogeneous haze characteristics. CPIFNet employs a two-stage architecture consisting of an Image Enhancement Network (IENet) stage and an Image Fusion Network (IFNet) stage. In the first stage, multiple IENet branches are independently trained on homogeneous haze datasets of different concentration levels, producing enhancement models that excel at restoring regions matching their respective haze densities. In the second stage, the IFNet intelligently aggregates the advantageous regions from all enhancement outputs through deep feature stacking and merging, yielding a unified high-quality dehazed result. Furthermore, we introduce a comprehensive loss function incorporating reconstruction, perceptual, structural, and color losses to jointly supervise both stages.
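The branch-then-fuse idea above can be illustrated with a toy NumPy sketch. Note the assumptions: the real IFNet merges deep feature stacks with learned convolutions, whereas this stand-in simply blends the K branch outputs with a per-pixel softmax over score maps, which captures the "pick the best-restored region from each branch" intuition without the learned machinery.

```python
import numpy as np

def fuse_branches(branch_outputs, weight_logits):
    """Per-pixel soft selection among enhancement-branch outputs.

    branch_outputs: (K, H, W, 3) restored images from K IENet-style branches,
                    each specialized for one haze concentration level.
    weight_logits:  (K, H, W) per-pixel scores (stand-in for the IFNet stage).
    Returns a single (H, W, 3) fused image.
    """
    # Numerically stable softmax over the branch axis.
    w = np.exp(weight_logits - weight_logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)
    # Weighted blend: regions where a branch scores highest dominate the output.
    return np.sum(w[..., None] * branch_outputs, axis=0)
```

Usage: with two branches and logits strongly favoring branch 0 everywhere, the fused result is effectively branch 0's output; spatially varying logits would instead stitch together the advantageous regions of each branch, mirroring the paper's region-wise aggregation.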