Leave No Stone Unturned: Uncovering Holistic Audio-Visual Intrinsic Coherence for Deepfake Detection

arXiv cs.CV / 3/26/2026


Key Points

  • The paper introduces HAVIC, a deepfake detector designed to exploit intrinsic coherence within and across audio and visual modalities rather than relying on unimodal artifacts or simple audio-visual discrepancies.
  • HAVIC is pretrained on authentic videos to learn priors of modality-specific structural coherence and inter-modal micro/macro coherence, then uses holistic adaptive aggregation to dynamically fuse audio-visual features.
  • The authors report that this approach improves generalization, including on cross-dataset tests where generator-specific artifact methods typically degrade.
  • They also release HiFi-AVDF, a high-fidelity audio-visual deepfake dataset covering both text-to-video and image-to-video forgeries generated by state-of-the-art commercial systems.
  • Experiments show HAVIC achieves sizable gains over prior state-of-the-art methods, including +9.39% AP and +9.37% AUC in the most challenging cross-dataset scenario, with code and data made publicly available.

Abstract

The rapid progress of generative AI has enabled hyper-realistic audio-visual deepfakes, intensifying threats to personal security and social trust. Most existing deepfake detectors rely on either uni-modal artifacts or audio-visual discrepancies, failing to jointly leverage both sources of information. Moreover, detectors that rely on generator-specific artifacts tend to exhibit degraded generalization when confronted with unseen forgeries. We argue that robust and generalizable detection should be grounded in intrinsic audio-visual coherence within and across modalities. Accordingly, we propose HAVIC, a Holistic Audio-Visual Intrinsic Coherence-based deepfake detector. HAVIC first learns priors of modality-specific structural coherence and of inter-modal micro- and macro-coherence by pre-training on authentic videos. Based on the learned priors, HAVIC then performs holistic adaptive aggregation to dynamically fuse audio-visual features for deepfake detection. Additionally, we introduce HiFi-AVDF, a high-fidelity audio-visual deepfake dataset featuring both text-to-video and image-to-video forgeries from state-of-the-art commercial generators. Extensive experiments across several benchmarks demonstrate that HAVIC significantly outperforms existing state-of-the-art methods, achieving improvements of 9.39% AP and 9.37% AUC in the most challenging cross-dataset scenario. Our code and dataset are available at https://github.com/tuffy-studio/HAVIC.
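The abstract does not specify how "holistic adaptive aggregation" is computed, but the phrase "dynamically fuse audio-visual features" suggests a learned, input-dependent weighting of the two modality embeddings. The sketch below is purely illustrative and is not the paper's implementation: the gating form, the function names, and the scalar coherence scores are all assumptions, shown only to make the idea of adaptive (rather than fixed) fusion concrete.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_fusion(audio_feat: np.ndarray,
                    visual_feat: np.ndarray,
                    w_audio: np.ndarray,
                    w_visual: np.ndarray) -> np.ndarray:
    """Illustrative gated fusion (hypothetical, not HAVIC's actual module).

    Each modality gets a scalar score from a learned projection (here,
    random placeholder weights); a softmax over the two scores yields
    input-dependent fusion weights that sum to 1.
    """
    score_a = float(audio_feat @ w_audio)    # assumed coherence score, audio
    score_v = float(visual_feat @ w_visual)  # assumed coherence score, visual
    gate = softmax(np.array([score_a, score_v]))
    return gate[0] * audio_feat + gate[1] * visual_feat

# Toy usage with random stand-ins for real embeddings and learned weights.
rng = np.random.default_rng(0)
audio = rng.standard_normal(8)
visual = rng.standard_normal(8)
fused = adaptive_fusion(audio, visual,
                        rng.standard_normal(8), rng.standard_normal(8))
```

Because the gate depends on the inputs, a clip whose audio stream looks less coherent can be down-weighted at fusion time, which is the intuitive appeal of adaptive aggregation over a fixed concatenation.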