Has Google’s AI watermarking system been reverse-engineered?

The Verge / 4/14/2026


Key Points

  • A software developer (Aloshdenny) claims to have reverse-engineered Google DeepMind’s SynthID AI watermarking system, demonstrating ways to strip watermarks from generated images or add them manually to other content.
  • The developer says the method required roughly 200 Gemini-generated images plus signal processing, and has published an open-source repository and Medium write-up detailing the approach.
  • Google disputes the claim, saying the reverse-engineering allegation is not accurate.
  • The article highlights ongoing uncertainty and risk around the robustness of AI watermarking against adversarial analysis and tooling.
[Image: A mannequin’s face covered in pixels.]

A software developer claims to have reverse-engineered Google DeepMind's SynthID system, showing how AI watermarks can be stripped from generated images or manually inserted into other works, a claim that, according to Google, isn't accurate.

The developer, going by the username Aloshdenny, has open-sourced their work on GitHub and documented their process, claiming all it required was 200 Gemini-generated images, signal processing, and "way too much free time." A little weed also seemed to help.

"No neural networks. No proprietary access," Aloshdenny said on Medium. "Turns out if you're unemployed and average enough 'pure black' AI-generated im …
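The quoted description of averaging many "pure black" generated images hints at a classic signal-processing idea: if a watermark were a fixed, low-amplitude additive pattern, averaging many near-uniform images would cancel out the independent per-image noise and leave the shared pattern behind. The sketch below illustrates that general averaging principle only; it is not SynthID's actual scheme, and the pattern, image count, and noise levels are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32

# Hypothetical fixed, low-amplitude additive "watermark" pattern.
watermark = 0.05 * np.sin(np.linspace(0, 8 * np.pi, H * W)).reshape(H, W)

# Simulate 200 near-black generated images: tiny zero-mean random
# content plus the same fixed pattern in every image.
images = [rng.normal(0.0, 0.02, (H, W)) + watermark for _ in range(200)]

# Averaging suppresses the zero-mean noise (error shrinks like 1/sqrt(N))
# while the constant pattern survives intact.
estimate = np.mean(images, axis=0)

# The estimate is close to the true pattern; subtracting it from a
# watermarked image would "strip" the pattern under these assumptions.
residual = float(np.max(np.abs(estimate - watermark)))
clean = images[0] - estimate
print(residual)
```

With 200 samples the standard error of the mean is about 0.02 / √200 ≈ 0.0014 per pixel, so the recovered pattern is roughly an order of magnitude more precise than any single image; this is why a relatively small corpus can suffice when the embedded signal is constant across images.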

Read the full story at The Verge.