JND-Guided Neural Watermarking with Spatial Transformer Decoding for Screen-Capture Robustness

arXiv cs.CV / 3/31/2026


Key Points

  • The paper proposes an end-to-end deep learning framework for screen-capture-robust neural watermarking that jointly optimizes watermark embedding and extraction under realistic camera/screen distortions.
  • It introduces a noise simulation layer (including a physically motivated Moiré pattern generator) and adversarial training to improve robustness against coupled artifacts such as moiré, color-gamut shifts, perspective warping, and sensor noise.
  • A JND (Just Noticeable Distortion) perceptual loss adaptively controls embedding strength by matching watermark residuals to a JND coefficient map, aiming to preserve visual quality.
  • Two automatic localization components—foreground extraction via semantic segmentation and a symmetric noise-template mechanism for anti-cropping recovery—enable largely automated decoding in deployment-like conditions.
  • Experiments report strong reconstruction/quality metrics (average PSNR ~30.94 dB, SSIM ~0.94) while embedding 127-bit payloads under the targeted screen-shooting channel.
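To make the Moiré component of the noise simulation layer concrete, here is a minimal sketch of how such a physically motivated Moiré generator could look: two slightly misaligned sinusoidal gratings (standing in for the display pixel grid and the camera sensor grid) are multiplied, producing the low-frequency beat fringes characteristic of screen recapture. The function name, parameters, and exact grating model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def simulate_moire(image, freq=0.7, angle_deg=15.0, strength=0.05, seed=None):
    """Overlay a synthetic Moire-like interference pattern on an image in [0, 1].

    Illustrative sketch only: the product of two near-aligned gratings
    (display grid vs. sensor grid) contains a low-frequency beat term,
    which is what appears visually as Moire fringes.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)

    # Randomly jitter the relative rotation between the two grids.
    theta = np.deg2rad(angle_deg + rng.uniform(-2.0, 2.0))

    # Grating 1: display pixel grid (axis-aligned).
    g1 = np.sin(2.0 * np.pi * freq * xs)

    # Grating 2: camera sensor grid, slightly rotated and scale-mismatched.
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    g2 = np.sin(2.0 * np.pi * freq * 1.02 * xr)

    # The product g1 * g2 contains the low-frequency Moire beat pattern.
    fringe = g1 * g2
    if image.ndim == 3:
        fringe = fringe[..., None]  # broadcast over color channels
    return np.clip(image + strength * fringe, 0.0, 1.0)
```

In an end-to-end training setup like the one described, a layer of this kind would be applied (alongside color-shift, warping, and noise models) between the encoder and decoder so the decoder learns to extract bits from Moiré-corrupted inputs.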

Abstract

Screen-shooting robust watermarking aims to imperceptibly embed extractable information into host images such that the watermark survives the complex distortion pipeline of screen display and camera recapture. However, achieving high extraction accuracy while maintaining satisfactory visual quality remains an open challenge, primarily because the screen-shooting channel introduces severe and entangled degradations including Moiré patterns, color-gamut shifts, perspective warping, and sensor noise. In this paper, we present an end-to-end deep learning framework that jointly optimizes watermark embedding and extraction for screen-shooting robustness. Our framework incorporates three key innovations: (i) a comprehensive noise simulation layer that faithfully models realistic screen-shooting distortions, notably including a physically motivated Moiré pattern generator, enabling the network to learn robust representations against the full spectrum of capture-channel noise through adversarial training; (ii) a Just Noticeable Distortion (JND) perceptual loss function that adaptively modulates watermark embedding strength by supervising the perceptual discrepancy between the JND coefficient map and the watermark residual, thereby concentrating watermark energy in perceptually insensitive regions to maximize visual quality; and (iii) two complementary automatic localization modules, a semantic-segmentation-based foreground extractor for captured image rectification and a symmetric noise template mechanism for anti-cropping region recovery, that enable fully automated watermark decoding under realistic deployment conditions. Extensive experiments demonstrate that our method achieves an average PSNR of 30.94 dB and SSIM of 0.94 on watermarked images while embedding 127-bit payloads.
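One plausible reading of the JND perceptual loss described above is a hinge-style penalty: residual magnitudes below the per-pixel JND threshold are perceptually free, while any excess is penalized, steering watermark energy toward insensitive regions. The sketch below encodes that reading in numpy; the function name and exact penalty form are assumptions rather than the paper's formulation.

```python
import numpy as np

def jnd_perceptual_loss(watermarked, host, jnd_map):
    """Hinge-style JND loss (illustrative, not the paper's exact loss).

    Only the portion of the watermark residual that exceeds the per-pixel
    Just Noticeable Distortion threshold contributes to the loss, so the
    encoder is pushed to hide watermark energy where it is imperceptible.
    """
    residual = np.abs(watermarked - host)          # per-pixel embedding strength
    excess = np.maximum(residual - jnd_map, 0.0)   # over-threshold part only
    return float(np.mean(excess ** 2))
```

For example, a residual that stays everywhere below the JND map yields zero loss, while pushing embedding strength past the threshold in flat, sensitive regions raises it, which is the adaptive-strength behavior the abstract describes.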