Are DeepFakes Realistic Enough? Exploring Semantic Mismatch as a Novel Challenge

arXiv cs.CV / 5/1/2026


Key Points

  • The paper argues that many DeepFake detection benchmarks use overly simple binary setups and fail to capture realistic variations in how manipulations occur across audio and video.
  • It proposes a new evaluation scenario (RARV-SMM) that explicitly tests semantic-level inconsistency between authentic audio and authentic video, beyond existing four-class audio-visual formulations.
  • Experiments on FakeAVCeleb show that state-of-the-art models struggle when the DeepFake signal originates in the content itself rather than in the integrity of the data source.
  • The authors introduce RARV-SMM variants to reveal different architectural weaknesses as audio-visual divergence increases, and they also propose a semantic reinforcement approach using semantic mismatch modeling plus ImageBind embeddings to improve detection performance.
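The extended label space described above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the four base class names follow the usual FakeAVCeleb-style audio-visual naming, and the `label` helper is a hypothetical mapping from per-modality integrity flags to the five-class setup.

```python
from enum import Enum

class AVClass(Enum):
    """Five-way label space: the four standard audio-visual manipulation
    classes plus the paper's new semantic-mismatch class (RARV-SMM).
    Base class names are illustrative, in FakeAVCeleb-style notation."""
    RARV = 0       # real audio, real video, semantically consistent
    RAFV = 1       # real audio, fake video
    FARV = 2       # fake audio, real video
    FAFV = 3       # fake audio, fake video
    RARV_SMM = 4   # real audio, real video, but semantically mismatched

def label(audio_fake: bool, video_fake: bool, semantic_mismatch: bool) -> AVClass:
    """Hypothetical mapping from per-modality integrity flags (plus a
    semantic-mismatch flag for fully authentic pairs) to the five classes."""
    if audio_fake and video_fake:
        return AVClass.FAFV
    if audio_fake:
        return AVClass.FARV
    if video_fake:
        return AVClass.RAFV
    return AVClass.RARV_SMM if semantic_mismatch else AVClass.RARV
```

The point of the fifth class is that both modalities pass any source-integrity check, so a detector must reason about content rather than artifacts.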

Abstract

Current DeepFake detection scenarios are mostly binary, yet manipulations can target the audio, the video, or both, and this variability is not captured in binary settings. Four-class audio-visual formulations address this by discriminating the manipulation type, but they introduce an unresolved problem: models may rely solely on data-source integrity to detect DeepFakes without evaluating their semantic consistency. If the DeepFake originates not in the data source but in its content, can the state of the art assess semantic mismatch? This paper proposes a new evaluation setup that extends the four-class formulation by explicitly modeling semantic-level inconsistency between authentic modalities through the introduction of a new class: Real Audio-Real Video with Semantic Mismatch (RARV-SMM). Using the FakeAVCeleb dataset, we assess the robustness of state-of-the-art models in this more realistic DeepFake setting, highlighting the limitations of existing approaches when faced with semantically mismatched data. We further introduce three RARV-SMM variants that expose distinct architectural vulnerabilities as audio-visual divergence increases. We also propose a semantic reinforcement strategy that incorporates the semantic mismatch class and ImageBind embeddings to improve DeepFake detection in both our proposed and state-of-the-art settings, on FakeAVCeleb and LAV-DF, paving the way toward more realistic DeepFake detectors. The source code and data are available at https://github.com/.
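To make the role of joint embeddings concrete, here is a minimal sketch of how embeddings from a shared audio-visual space (such as ImageBind's) could be used to score semantic mismatch. This is not the paper's method: the cosine-distance score, the `flag_mismatch` decision rule, and the 0.5 threshold are all illustrative assumptions, and the embeddings are stand-in vectors rather than real ImageBind outputs.

```python
import numpy as np

def semantic_mismatch_score(audio_emb: np.ndarray, video_emb: np.ndarray) -> float:
    """Cosine distance (1 - cosine similarity) between audio and video
    embeddings. In a joint space like ImageBind's, a semantically matched
    clip should score near 0 and a mismatched one noticeably higher."""
    a = audio_emb / np.linalg.norm(audio_emb)
    v = video_emb / np.linalg.norm(video_emb)
    return float(1.0 - a @ v)

def flag_mismatch(audio_emb: np.ndarray, video_emb: np.ndarray,
                  threshold: float = 0.5) -> bool:
    """Hypothetical RARV-SMM decision rule: flag the pair when the
    mismatch score exceeds a threshold (0.5 here is illustrative,
    not a value from the paper)."""
    return semantic_mismatch_score(audio_emb, video_emb) > threshold
```

The key property this relies on is that both modalities are authentic, so only a content-level signal (the embedding distance) can separate matched from mismatched pairs.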