Transferable Multi-Bit Watermarking Across Frozen Diffusion Models via Latent Consistency Bridges

arXiv cs.CV / 3/24/2026


Key Points

  • The paper introduces DiffMark, a plug-and-play watermarking method for diffusion image generators that works with completely frozen diffusion models by adding a persistent learned perturbation at every denoising step.
  • DiffMark enables single-pass, multi-bit watermark detection by accumulating the watermark signal in the final denoised latent (z0), avoiding costly N-step DDIM inversion used by earlier sampling-based approaches.
  • To make training feasible through a frozen UNet, the method uses Latent Consistency Models (LCM) as a differentiable bridge, cutting gradient steps from about 50 DDIM steps to 4 LCM steps.
  • The authors report large speed improvements (single-pass detection in ~16.4 ms, about a 45× gain) while also supporting per-image secret keys and cross-model transferability without retraining for each architecture.
  • The approach is claimed to preserve competitive robustness against distortion, regeneration, and adversarial attacks while transferring to unseen diffusion-based architectures.
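The injection-and-accumulation idea in the bullets above can be illustrated with a toy numerical sketch. Everything here is a stand-in: the "frozen denoiser" is a simple linear contraction rather than a real UNet, and `encode_secret` is a hypothetical fixed mapping where the paper instead learns an encoder. The point is only to show how a perturbation added at every denoising step accumulates in the final latent and can be read off in a single pass, with no inversion of the sampling chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_denoise_step(z):
    # Stand-in for one frozen DDIM step: a plain contraction toward
    # zero. A real diffusion model would predict and remove noise.
    return 0.9 * z

def encode_secret(secret_bits, dim):
    # Hypothetical encoder: maps a multi-bit secret to a small fixed
    # perturbation delta (the actual method learns this mapping).
    sign = 2.0 * np.array(secret_bits, dtype=float) - 1.0
    return 0.05 * np.repeat(sign, dim // len(secret_bits))

secret = [1, 0, 1, 1]           # toy 4-bit message
dim = 16                        # toy latent dimension
delta = encode_secret(secret, dim)

z = rng.standard_normal(dim)    # initial noise z_T
for _ in range(50):             # denoising chain; model stays frozen
    z = frozen_denoise_step(z) + delta   # inject delta at every step

# Single-pass detection: the accumulated signal dominates z_0, so the
# bits are recovered directly from z_0 without DDIM inversion.
chunks = z.reshape(len(secret), -1)
decoded = (chunks.mean(axis=1) > 0).astype(int).tolist()
print(decoded)  # -> [1, 0, 1, 1]
```

Because each step contracts the previous latent but re-adds `delta`, the initial noise decays geometrically (0.9^50 ≈ 0.005) while the watermark converges to roughly 10× `delta`, which is why a simple sign readout suffices in this toy setting.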

Abstract

As diffusion models (DMs) enable photorealistic image generation at unprecedented scale, watermarking techniques have become essential for provenance establishment and accountability. Existing methods face challenges: sampling-based approaches operate on frozen models but require costly N-step Denoising Diffusion Implicit Models (DDIM) inversion (typically N=50) for zero-bit-only detection; fine-tuning-based methods achieve fast multi-bit extraction but couple the watermark to a specific model checkpoint, requiring retraining for each architecture. We propose DiffMark, a plug-and-play watermarking method that offers three key advantages over existing approaches: single-pass multi-bit detection, per-image key flexibility, and cross-model transferability. Rather than encoding the watermark into the initial noise vector, DiffMark injects a persistent learned perturbation \delta at every denoising step of a completely frozen DM. The watermark signal accumulates in the final denoised latent z_0 and is recovered in a single forward pass. The central challenge of backpropagating gradients through a frozen UNet without traversing the full denoising chain is addressed by employing Latent Consistency Models (LCM) as a differentiable training bridge. This reduces the number of gradient steps from 50 DDIM steps to 4 LCM steps and enables single-pass detection in 16.4 ms, a 45x speedup over sampling-based methods. Moreover, by this design, the encoder learns to map any runtime secret to a unique perturbation at inference time, providing genuine per-image key flexibility and transferability to unseen diffusion-based architectures without per-model fine-tuning. While achieving these advantages, DiffMark maintains competitive watermark robustness against distortion, regeneration, and adversarial attacks.
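The training bridge described in the abstract can also be sketched numerically. The sketch below replaces the frozen LCM UNet with a fixed linear map `A` and uses a toy linear decoder `D` (both hypothetical stand-ins), but it preserves the key structure: gradients flow from a bit-readout loss through only 4 frozen "LCM" steps back to the encoder weights `W`, which alone are updated, so the watermark couples to the encoder rather than to the generator's checkpoint.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_bits, n_lcm_steps = 8, 4, 4

# Frozen few-step model stand-in: a fixed linear map. An LCM takes
# ~4 large steps instead of ~50 DDIM steps, so encoder gradients
# traverse only 4 applications of the frozen network.
A = 0.5 * np.eye(dim)

W = 0.01 * rng.standard_normal((dim, n_bits))         # trainable encoder (toy)
D = np.repeat(np.eye(n_bits), dim // n_bits, axis=1)  # fixed toy bit decoder

bits = np.array([1.0, 0.0, 1.0, 1.0])
s = 2 * bits - 1                 # secret as +/-1 symbols
target = s

for _ in range(300):
    delta = W @ s                # encoder: runtime secret -> perturbation
    z = rng.standard_normal(dim) # fresh initial noise each iteration
    S = np.zeros((dim, dim))     # accumulated sensitivity dz_0/d(delta)
    for _ in range(n_lcm_steps): # short LCM-style chain, weights frozen
        z = A @ z + delta
        S = A @ S + np.eye(dim)
    err = D @ z - target         # squared-error gradient at the readout
    # Chain rule through the frozen chain: only the encoder W is updated.
    grad_delta = S.T @ (D.T @ err)
    W -= 0.02 * np.outer(grad_delta, s)

# Noise-free check of the learned watermark component of z_0
decoded = (D @ (S @ (W @ s)) > 0).astype(int).tolist()
```

Because the chain is only four steps long, the sensitivity matrix `S` stays cheap to accumulate; with a 50-step DDIM chain the same backpropagation would be roughly an order of magnitude deeper, which is the bottleneck the LCM bridge is designed to avoid.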
