AI Navigate

Exposing Cross-Modal Consistency for Fake News Detection in Short-Form Videos

arXiv cs.AI / 3/17/2026


Key Points

  • On two benchmarks, real short-form videos show high text-visual consistency and moderate text-audio consistency, while fake videos exhibit the reverse pattern.
  • The authors introduce MAGIC3, a detector that explicitly models cross-tri-modal (text, visual, audio) consistency, combining explicit pairwise and global scores with token- and frame-level signals derived from cross-modal attention.
  • MAGIC3 incorporates multi-style LLM rewrites to produce style-robust text representations and an uncertainty-aware classifier that enables selective routing through a vision-language model (VLM) pathway.
  • On FakeSV and FakeTT, MAGIC3 matches VLM-level accuracy while delivering 18-27× higher throughput and 93% VRAM savings, offering a strong cost-performance trade-off.
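To make the consistency idea concrete, here is a minimal sketch of pairwise and global cross-modal consistency scoring over precomputed modality embeddings. This is an illustration of the general technique (cosine similarity between embedding pairs, averaged into one global score), not MAGIC3's actual formulation; the function names and the use of a plain mean are assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def consistency_scores(text_emb, vis_emb, aud_emb):
    """Illustrative pairwise + global cross-modal consistency.

    Returns the three pairwise scores and a single global score
    (here simply their mean) -- the kind of interpretable axis the
    paper describes, not its actual model.
    """
    tv = cosine(text_emb, vis_emb)  # text-visual consistency
    ta = cosine(text_emb, aud_emb)  # text-audio consistency
    va = cosine(vis_emb, aud_emb)   # visual-audio consistency
    return {"text_visual": tv, "text_audio": ta,
            "visual_audio": va, "global": (tv + ta + va) / 3}
```

Under the paper's observed asymmetry, a real video would tend to show a high `text_visual` score and a moderate `text_audio` score, while a fake video would show the reverse.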

Abstract

Short-form video platforms are major channels for news but also fertile ground for multimodal misinformation where each modality appears plausible alone yet cross-modal relationships are subtly inconsistent, like mismatched visuals and captions. On two benchmark datasets, FakeSV (Chinese) and FakeTT (English), we observe a clear asymmetry: real videos exhibit high text-visual but moderate text-audio consistency, while fake videos show the opposite pattern. Moreover, a single global consistency score forms an interpretable axis along which fake probability and prediction errors vary smoothly. Motivated by these observations, we present MAGIC3 (Modal-Adversarial Gated Interaction and Consistency-Centric Classifier), a detector that explicitly models and exposes cross-tri-modal consistency signals at multiple granularities. MAGIC3 combines explicit pairwise and global consistency modeling with token- and frame-level consistency signals derived from cross-modal attention, incorporates multi-style LLM rewrites to obtain style-robust text representations, and employs an uncertainty-aware classifier for selective VLM routing. Using pre-extracted features, MAGIC3 consistently outperforms the strongest non-VLM baselines on FakeSV and FakeTT. While matching VLM-level accuracy, the two-stage system achieves 18-27× higher throughput and 93% VRAM savings, offering a strong cost-performance trade-off.
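The two-stage design the abstract describes can be sketched as a confidence-gated dispatcher: the cheap detector's verdict is kept when it is confident, and only uncertain cases are escalated to the expensive VLM pathway. This is a generic illustration of uncertainty-aware selective routing; the `route` function and the 0.85 threshold are hypothetical, not values from the paper.

```python
def route(fast_prob_fake, threshold=0.85):
    """Illustrative selective routing for a two-stage detector.

    fast_prob_fake: the fast (non-VLM) detector's predicted
    probability that the video is fake.
    threshold: hypothetical confidence cutoff; below it the
    sample is deferred to the VLM pathway.

    Returns (stage, verdict), where verdict is True for "fake",
    False for "real", or None when deferred to the VLM.
    """
    # Confidence = probability of the predicted class.
    confidence = max(fast_prob_fake, 1.0 - fast_prob_fake)
    if confidence >= threshold:
        return ("fast", fast_prob_fake >= 0.5)
    return ("vlm", None)  # uncertain: escalate to the VLM stage
```

Because most samples are handled by the fast stage and only a fraction reach the VLM, such a gate is one plausible way a system could approach VLM-level accuracy at a fraction of the throughput and VRAM cost.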