UCMNet: Uncertainty-Aware Context Memory Network for Under-Display Camera Image Restoration

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces UCMNet, a lightweight uncertainty-aware context memory network designed to restore images captured by under-display cameras suffering from spatially varying diffraction and scattering degradations.
  • Instead of applying uniform restoration, UCMNet uses an uncertainty map learned via an uncertainty-driven loss to adaptively guide recovery of high-frequency details in different regions.
  • The model employs a Memory Bank/Context Bank mechanism to retrieve region-adaptive contextual information, leveraging uncertainty as a prior for better modeling of non-uniform degradations.
  • Experiments report state-of-the-art results on multiple benchmarks while using about 30% fewer parameters than prior approaches.
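The paper's uncertainty-gated retrieval is not specified in detail here, so the following is only a minimal NumPy sketch of the general idea under stated assumptions: pixel features attend over a small learned bank of context vectors, and the per-pixel uncertainty map gates how much retrieved context is blended back in (so well-preserved regions pass through nearly unchanged). The function name, the additive blending, and the single-head dot-product attention are all hypothetical choices for illustration, not the paper's architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_gated_retrieval(features, uncertainty, keys, values):
    """Hypothetical sketch of uncertainty-gated context retrieval.

    features:    (P, C) flattened per-pixel features
    uncertainty: (P,) values in [0, 1]; high = heavily degraded region
    keys/values: (N, C) entries of a learned context bank
    """
    C = features.shape[1]
    # Each pixel attends over the context bank (scaled dot-product).
    attn = softmax(features @ keys.T / np.sqrt(C))   # (P, N)
    retrieved = attn @ values                        # (P, C)
    # Uncertainty gates how strongly retrieved context is injected:
    # confident (low-uncertainty) pixels keep their original features.
    return features + uncertainty[:, None] * retrieved
```

With `uncertainty` all zeros the output equals the input features, which matches the intuition that adaptive processing should concentrate on regions the model is unsure about.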

Abstract

Under-display cameras (UDCs) allow for full-screen designs by positioning the imaging sensor underneath the display. However, light diffraction and scattering through the display layers produce complex, spatially varying degradations that significantly reduce high-frequency detail. Current PSF-based physical modeling techniques and frequency-separation networks reconstruct low-frequency structure and maintain overall color consistency well, but they still struggle to recover fine details under complex, spatially varying degradation. To address this, we propose a lightweight **U**ncertainty-aware **C**ontext-**M**emory **Net**work (**UCMNet**) for UDC image restoration. Unlike previous methods that apply uniform restoration, UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details in regions with varying degradations. The estimated uncertainty maps, learned through an uncertainty-driven loss, quantify the spatial uncertainty induced by diffraction and scattering and guide the Memory Bank to retrieve region-adaptive context from the Context Bank. This enables effective modeling of the non-uniform degradation characteristics inherent to UDC imaging. Leveraging this uncertainty as a prior, UCMNet achieves state-of-the-art performance on multiple benchmarks with 30% fewer parameters than previous models. Project page: https://kdhrick2222.github.io/projects/UCMNet/
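The abstract does not spell out the uncertainty-driven loss. A common way to learn a per-pixel uncertainty map jointly with the restoration is a heteroscedastic L1 objective, where the network predicts a log-uncertainty alongside the restored image; the sketch below is that generic formulation, not necessarily the paper's exact loss.

```python
import numpy as np

def uncertainty_driven_l1(pred, target, log_sigma):
    """Generic heteroscedastic L1 loss (illustrative, not the paper's).

    Pixels with high predicted uncertainty (large log_sigma) have their
    residual down-weighted by exp(-log_sigma), while the +log_sigma term
    penalises the trivial escape of claiming high uncertainty everywhere.
    Predicting log-sigma rather than sigma keeps the loss numerically stable.
    """
    return np.mean(np.abs(pred - target) * np.exp(-log_sigma) + log_sigma)
```

When `log_sigma` is zero everywhere, the loss reduces to the plain mean absolute error, so the uncertainty branch only changes training where the model actually flags degraded regions.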