The only way to fight deepfakes is by making deepfakes

The Verge / 4/17/2026

💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Tools & Practical Usage

Key Points

  • The article describes an attempt to test deepfake deception using an AI-generated voice that closely imitates the author's, and notes that early results can fail when audio conditions (such as delays or crosstalk) are imperfect.
  • It argues that effective counter-deepfake defenses may require generating realistic deepfakes as part of the detection and verification process, rather than relying only on “spotting” fakes.
  • The story frames deepfake detection as a practical reality problem where adversarial quality, communication context, and signal conditions matter as much as the underlying model.
  • It highlights the need for defenses that are trained and evaluated against the kinds of deepfakes humans are most likely to encounter in real scenarios.
[Image: A mannequin’s face covered in pixels.]

I was unsure if my parents would notice that the voice on the other end wasn't mine - or that it was mine, sort of, but it wasn't me. The voice said hello, asked my dad how he was doing, and asked again when he didn't respond quickly enough. "What is that, Gaby?" He realized something was wrong almost immediately. I explained I had tried to trick him and it clearly hadn't worked. "It didn't," he said. "It sounded like a robot."

It wasn't a perfect experiment. My parents were out of the country, which made for a shoddy connection. They were having lunch with friends, and the voice couldn't deal with crosstalk or delays in the audio - it trie …

Read the full story at The Verge.