Frequency-Decomposed INR for NIR-Assisted Low-Light RGB Image Denoising

arXiv cs.CV · April 21, 2026

📰 News · Models & Research

Key Points

  • The paper proposes a near-infrared (NIR)-assisted low-light RGB image denoising/restoration method to address severe noise and high-frequency structural degradation in visible images.
  • It introduces a Frequency-Decoupled Implicit Neural Representation (FD-INR), which exploits RGB–NIR cross-modal frequency correlations and uses multi-scale wavelet transforms to separate low- and high-frequency components.
  • The method uses a dual-branch implicit neural representation with cross-modal differentiated frequency supervision: low-frequency RGB guides luminance and color reconstruction, while high-SNR NIR constrains high-frequency texture generation.
  • An uncertainty-based adaptive weighting loss is added to balance frequency-specific tasks and reduce color distortion and artifacts from rigid spatial-domain fusion.
  • Experiments reportedly show FD-INR improves both luminance consistency and structural detail, and performs better on arbitrary-resolution reconstruction due to its continuous implicit representation.
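The wavelet decomposition in the second bullet can be illustrated with a single-level 2D Haar transform. The numpy sketch below (function names are our own, not from the paper) splits an image into a low-frequency approximation band, the kind of content the RGB branch would supervise, and three high-frequency detail bands, the kind the NIR branch would constrain, and shows that the split is exactly invertible:

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform: returns the low-frequency
    approximation (LL) plus three high-frequency detail bands (LH, HL, HH).
    Assumes even height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # coarse luminance / color structure
    lh = (a + b - c - d) / 4.0   # vertical detail
    hl = (a - b + c - d) / 4.0   # horizontal detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def haar_reconstruct(ll, details):
    """Exact inverse of haar_decompose."""
    lh, hl, hh = details
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out
```

A multi-scale (multi-level) decomposition, as the paper uses, simply applies `haar_decompose` recursively to the LL band; the paper's actual wavelet family may differ from Haar.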

Abstract

To address severe noise and high-frequency structural degradation in visible images under low-light conditions, this paper proposes a near-infrared (NIR)-assisted low-light image restoration method based on a Frequency-Decoupled Implicit Neural Representation (FD-INR). Building on a statistical prior over RGB–NIR cross-modal frequency correlations, specifically that low-frequency RGB signals are more reliable whereas high-frequency NIR signals exhibit higher cross-modal correlation, we explicitly decompose images into distinct frequency components via multi-scale wavelet transforms and construct a dual-branch implicit neural representation framework. Within this framework, we design a cross-modal differentiated frequency supervision mechanism: low-light RGB guides the reconstruction of low-frequency luminance and color, while high-SNR NIR signals constrain the generation of high-frequency texture details, achieving complementary advantages in the frequency domain. Furthermore, an uncertainty-based adaptive weighting loss is introduced to automatically balance the contributions of the different frequency tasks, mitigating the color distortion and artifacts caused by the rigid spatial-domain fusion common in traditional methods. Experimental results demonstrate that FD-INR not only effectively restores image luminance consistency and structural details but also, benefiting from its continuous implicit representation, outperforms existing methods on arbitrary-resolution reconstruction tasks, significantly enhancing the reliability of low-light perception.
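Uncertainty-based adaptive task weighting of the kind the abstract describes is commonly implemented in the style of Kendall et al.'s homoscedastic-uncertainty multi-task loss; the sketch below assumes that form (the paper's exact formulation may differ), with `log_vars` standing in for learnable per-task log-variance parameters:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Kendall-style uncertainty weighting:
        total = sum_i exp(-s_i) * L_i + s_i,
    where s_i = log(sigma_i^2) is a learnable per-task parameter. Tasks with
    higher estimated uncertainty are automatically down-weighted, while the
    additive s_i term penalizes inflating the uncertainty without bound."""
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))
```

With all `log_vars` at zero this reduces to a plain sum of the per-frequency losses; during training the `s_i` would be optimized jointly with the network so the low- and high-frequency branches balance themselves.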
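The arbitrary-resolution claim at the end of the abstract follows from the implicit representation itself: a coordinate network maps continuous (x, y) positions to pixel values, so any sampling grid queries the same function. A minimal numpy sketch, with a random-weight MLP standing in for one trained FD-INR branch (the encoding width and layer sizes here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(coords, num_freqs=4):
    """Fourier features: map (x, y) in [0, 1]^2 to sin/cos of scaled coords,
    giving the MLP access to high-frequency variation."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi        # (F,)
    ang = coords[..., None, :] * freqs[:, None]          # (..., F, 2)
    feat = np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)
    return feat.reshape(*coords.shape[:-1], -1)          # (..., 4*F)

# Hypothetical tiny 2-layer MLP; in practice these weights would be trained.
W1 = rng.normal(size=(16, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3));  b2 = np.zeros(3)

def inr_query(coords):
    """Query the implicit image at continuous coordinates; returns RGB."""
    h = np.maximum(positional_encoding(coords) @ W1 + b1, 0.0)  # ReLU
    return h @ W2 + b2

# Arbitrary-resolution sampling: every grid size queries the same network.
def render(res):
    xs = np.linspace(0.0, 1.0, res)
    grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)  # (res, res, 2)
    return inr_query(grid)                                        # (res, res, 3)
```

Because `render` only changes the coordinate grid, upscaling needs no resampling or interpolation of a fixed-size output, which is what lets INR-based methods evaluate at resolutions unseen during training.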