SARU: A Shadow-Aware and Removal Unified Framework for Remote Sensing Images with New Benchmarks

arXiv cs.CV / 4/29/2026


Key Points

  • The SARU paper addresses how shadows in remote sensing images degrade downstream tasks by unifying shadow detection and shadow removal instead of using separate cascaded steps.
  • SARU uses a two-stage design: a dual-branch detection module (DBCSF-Net) that fuses color-space and semantic features to produce high-fidelity shadow masks.
  • For restoration, SARU applies a training-free physical algorithm (N²SGSR) that transfers illumination-related properties from adjacent non-shadow regions using only a single input image.
  • The authors introduce two new benchmarks, RSISD (shadow detection) and SiSRB (single-image shadow removal), to enable more rigorous and comparable evaluation.
  • Experiments show SARU achieves state-of-the-art results on both AISD and the newly introduced benchmarks, while avoiding the need for paired shadow/non-shadow training data.

Abstract

Shadows are a prevalent problem in remote sensing imagery (RSI), degrading visual quality and severely limiting the performance of downstream tasks such as object detection and semantic segmentation. Most prior works treat shadow detection and removal as separate, cascaded tasks, which can lead to a cumbersome pipeline and error accumulation. Furthermore, many deep learning methods rely on paired shadow and non-shadow images for training, which are often unavailable in practice. To address these challenges, we propose the Shadow-Aware and Removal Unified (SARU) framework, a cohesive two-stage design. First, its dual-branch detection module (DBCSF-Net) fuses multi-color-space and semantic features to generate high-fidelity shadow masks, effectively distinguishing shadows from dark objects. Then, leveraging these masks, a novel training-free physical algorithm (N²SGSR) restores illumination by transferring properties from adjacent non-shadow regions within the single input image. To facilitate rigorous evaluation and foster future work, we also introduce two new benchmark datasets: the RSI Shadow Detection (RSISD) dataset and the Single-image Shadow Removal Benchmark (SiSRB). Extensive experiments demonstrate that SARU achieves state-of-the-art performance on both the public AISD dataset and our newly introduced benchmarks. By holistically integrating shadow detection and removal to mitigate error propagation and by eliminating the dependency on paired training data, SARU establishes a robust, practical framework for real-world RSI analysis. The source code and datasets are publicly available at: https://github.com/AeroVILab-AHU/SARU-Framework.
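To make the illumination-transfer idea concrete, here is a minimal sketch of a statistics-matching shadow correction: given a binary shadow mask, each channel's shadow-pixel distribution is linearly mapped onto the non-shadow distribution (mean/std matching). This is an illustrative assumption, not the authors' actual N²SGSR algorithm, which operates on adjacent regions with a more principled physical model; the function name and signature are hypothetical.

```python
import numpy as np

def correct_shadow(image, shadow_mask):
    """Hypothetical mean/std-matching shadow correction (illustrative only).

    image:       H x W x 3 float array in [0, 1]
    shadow_mask: H x W boolean array, True where a shadow was detected
    """
    out = image.astype(np.float64).copy()
    for c in range(out.shape[2]):
        channel = out[..., c]
        shad = channel[shadow_mask]      # shadowed pixel values
        lit = channel[~shadow_mask]      # non-shadow reference values
        if shad.size == 0 or lit.size == 0:
            continue
        # Linear transform mapping shadow statistics onto lit statistics.
        scale = lit.std() / (shad.std() + 1e-8)
        channel[shadow_mask] = (shad - shad.mean()) * scale + lit.mean()
    return np.clip(out, 0.0, 1.0)
```

Using the whole non-shadow area as the reference is a simplification; restricting the reference to pixels adjacent to each shadow region, as the paper describes, better preserves local illumination and surface material consistency.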