Unleashing Vision-Language Semantics for Deepfake Video Detection

arXiv cs.CV / 3/26/2026


Key Points

  • The proposed VLAForge leverages the latent cross-modal (vision × language) semantics of pre-trained Vision-Language Models (VLMs) such as CLIP for Deepfake Video Detection (DFD), aiming to overcome the limitations of prior approaches that rely on visual features alone.
  • A ForgePerceiver module learns diverse, subtle manipulation traces, from fine-grained to holistic cues, while preserving the pretrained Vision-Language Alignment (VLA) knowledge.
  • An Identity-Aware VLA score is introduced, coupling the cross-modal semantics with the forgery cues learned by ForgePerceiver to produce a more discriminative scoring signal.
  • Identity-informed text prompting extracts authenticity cues tailored to each identity; the method is reported to substantially outperform existing SOTA at both the frame and video levels.
  • The code is publicly released, and effectiveness is demonstrated on multiple video DFD benchmarks, spanning face-swapping forgeries to full-face generation forgeries.

Abstract

Recent Deepfake Video Detection (DFD) studies have demonstrated that pre-trained Vision-Language Models (VLMs) such as CLIP exhibit strong generalization capabilities in detecting artifacts across different identities. However, existing approaches focus on leveraging visual features only, overlooking their most distinctive strength -- the rich vision-language semantics embedded in the latent space. We propose VLAForge, a novel DFD framework that unleashes the potential of such cross-modal semantics to enhance the model's discriminability in deepfake detection. This work i) enhances the visual perception of the VLM through a ForgePerceiver, which acts as an independent learner to capture diverse, subtle forgery cues both granularly and holistically, while preserving the pretrained Vision-Language Alignment (VLA) knowledge, and ii) provides a complementary discriminative cue -- an Identity-Aware VLA score, derived by coupling cross-modal semantics with the forgery cues learned by ForgePerceiver. Notably, the VLA score is augmented by identity prior-informed text prompting to capture authenticity cues tailored to each identity, thereby enabling more discriminative cross-modal semantics. Comprehensive experiments on video DFD benchmarks, including classical face-swapping forgeries and recent full-face generation forgeries, demonstrate that our VLAForge substantially outperforms state-of-the-art methods at both the frame and video levels. Code is available at https://github.com/mala-lab/VLAForge.
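As a rough illustration of the cross-modal scoring idea, the sketch below contrasts a frame's similarity (in a shared CLIP-style embedding space) to an identity-conditioned "real" prompt versus a "fake" prompt. All names, shapes, and the exact score formula here are hypothetical stand-ins; the paper's actual Identity-Aware VLA score additionally incorporates the forgery cues learned by ForgePerceiver.

```python
import numpy as np

rng = np.random.default_rng(0)

def vla_score(frame_emb, real_prompt_emb, fake_prompt_emb):
    """Hypothetical VLA-style score: cosine similarity to an
    identity-conditioned 'real' prompt minus similarity to a 'fake'
    prompt. Higher scores suggest a more authentic frame."""
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    f = normalize(frame_emb)        # (N, D) per-frame visual embeddings
    r = normalize(real_prompt_emb)  # (D,) e.g. "a real video of <identity>"
    k = normalize(fake_prompt_emb)  # (D,) e.g. "a deepfake video of <identity>"
    return f @ r - f @ k            # (N,) per-frame scores in [-2, 2]

# Stand-in embeddings; in practice these would come from a frozen
# CLIP image encoder and text encoder, respectively.
frames = rng.standard_normal((8, 512))
real_p = rng.standard_normal(512)
fake_p = rng.standard_normal(512)
scores = vla_score(frames, real_p, fake_p)
print(scores.shape)  # (8,)
```

A video-level decision could then aggregate the per-frame scores, e.g. by averaging; this aggregation step is likewise an assumption for illustration, not the paper's specified procedure.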