DiffusionPrint: Learning Generative Fingerprints for Diffusion-Based Inpainting Localization
arXiv cs.CV / 4/15/2026
Key Points
- Diffusion-based inpainting can undermine existing image forgery localization (IFL) methods: the latent decoder regenerates the entire image, erasing the camera-level noise patterns those methods rely on for forensics.
- The paper introduces DiffusionPrint, a patch-level contrastive learning framework that learns a “generative fingerprint”, a forensic signal resilient to the spectral distortions introduced by latent decoding.
- DiffusionPrint trains a convolutional backbone with a MoCo-style contrastive objective and hard negative mining, using self-supervision based on the observation that inpainted regions produced by the same model exhibit consistent fingerprints.
- It outputs a discriminative forensic feature map intended to serve as a secondary modality in fusion-based IFL pipelines, improving localization when integrated into TruFor, MMFusion, and a lightweight baseline.
- Reported results show consistent gains across multiple generative models, including improvements of up to +28% on unseen mask types and generalization to unseen generative architectures; code is released on GitHub.
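To make the MoCo-style objective concrete, here is a minimal NumPy sketch of the InfoNCE loss such a framework would optimize per patch. This is an illustrative reconstruction, not the paper's code: the function name `info_nce`, the embedding dimension, and the temperature value are assumptions, and the negatives stand in for a MoCo memory queue of patches from other sources.

```python
import numpy as np

def info_nce(query, positive, negatives, temperature=0.07):
    """InfoNCE loss for one patch embedding (MoCo-style sketch).

    query, positive: (d,) embeddings of two patches assumed to carry
        the same generative fingerprint (the self-supervised positive pair).
    negatives: (K, d) embeddings drawn from a memory queue of patches
        from other models or pristine regions (hypothetical stand-in
        for MoCo's queue; hard negative mining would select these).
    """
    # L2-normalize so dot products are cosine similarities.
    q = query / np.linalg.norm(query)
    k = positive / np.linalg.norm(positive)
    neg = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)

    # Positive logit first, then one logit per queue negative.
    logits = np.concatenate(([q @ k], neg @ q)) / temperature
    logits -= logits.max()  # numerical stability before softmax

    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with positive at index 0
```

Pulling the positive pair from inpainted patches of the same model, and negatives from other models' outputs, is what pushes the backbone toward a fingerprint that is consistent within a generator but discriminative across generators.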