LLM-MRD: LLM-Guided Multi-View Reasoning Distillation for Fake News Detection

arXiv cs.AI / 3/23/2026


Key Points

  • Introduces LLM-MRD, a teacher-student framework for multimodal fake news detection that leverages LLM-guided multi-view reasoning to improve accuracy and efficiency.
  • The Student module constructs a comprehensive foundation from textual, visual, and cross-modal perspectives, while the Teacher module provides deep reasoning chains as supervision signals.
  • A Calibration Distillation mechanism efficiently transfers the complex reasoning-derived knowledge from teacher to student to enable fast inference without sacrificing performance.
  • Empirical results show significant improvements over state-of-the-art baselines across datasets, with average gains of 5.19% in accuracy (ACC) and 6.33% in F1-Fake, and code available at the authors' GitHub.
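The student's multi-view construction described above can be sketched roughly as follows. This is an illustrative PyTorch sketch, not the paper's actual architecture: the module names, dimensions, pooling choices, and the use of cross-attention for the cross-modal view are all assumptions.

```python
import torch
import torch.nn as nn

class MultiViewStudent(nn.Module):
    """Illustrative sketch of a student that forms textual, visual,
    and cross-modal views before classification. All architectural
    details here are assumptions, not the paper's design."""

    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        self.text_view = nn.Linear(dim, dim)    # textual view
        self.image_view = nn.Linear(dim, dim)   # visual view
        # cross-modal view: text tokens attend over image patches
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4,
                                                batch_first=True)
        self.classifier = nn.Linear(3 * dim, num_classes)

    def forward(self, text_feat, image_feat):
        # text_feat: (batch, tokens, dim); image_feat: (batch, patches, dim)
        t = self.text_view(text_feat).mean(dim=1)    # pooled textual view
        v = self.image_view(image_feat).mean(dim=1)  # pooled visual view
        x, _ = self.cross_attn(text_feat, image_feat, image_feat)
        c = x.mean(dim=1)                            # pooled cross-modal view
        # concatenate the three views into the classification foundation
        return self.classifier(torch.cat([t, v, c], dim=-1))
```

In this sketch the three views are simply concatenated; the paper's actual fusion and the form of the teacher's reasoning-chain supervision may differ.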

Abstract

Multimodal fake news detection is crucial for mitigating societal disinformation. Existing approaches attempt to address this by fusing multimodal features or leveraging Large Language Models (LLMs) for advanced reasoning. However, these methods suffer from serious limitations, including a lack of comprehensive multi-view judgment and fusion, and prohibitive reasoning inefficiency due to the high computational costs of LLMs. To address these issues, we propose **LLM**-Guided **M**ulti-View **R**easoning **D**istillation for Fake News Detection (**LLM-MRD**), a novel teacher-student framework. The Student Multi-view Reasoning module first constructs a comprehensive foundation from textual, visual, and cross-modal perspectives. Then, the Teacher Multi-view Reasoning module generates deep reasoning chains as rich supervision signals. Our core Calibration Distillation mechanism efficiently distills this complex reasoning-derived knowledge into the efficient student model. Experiments show LLM-MRD significantly outperforms state-of-the-art baselines. Notably, it demonstrates a comprehensive average improvement of 5.19% in ACC and 6.33% in F1-Fake when evaluated across all competing methods and datasets. Our code is available at https://github.com/Nasuro55/LLM-MRD
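The teacher-to-student transfer described in the abstract can be illustrated with a generic distillation objective. The paper's Calibration Distillation mechanism is not detailed here, so the following is only a standard knowledge-distillation sketch (cross-entropy on labels plus temperature-scaled KL divergence toward the teacher); the temperature, weighting, and calibration specifics are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic teacher-student distillation objective (a sketch; the
    paper's Calibration Distillation details are not reproduced here).

    Combines cross-entropy on ground-truth labels with a KL term that
    pulls the student's softened predictions toward the teacher's.
    """
    # supervised term on hard labels
    ce = F.cross_entropy(student_logits, labels)
    # soft-label term: KL(student || teacher) at temperature T,
    # rescaled by T^2 to keep gradient magnitudes comparable
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1 - alpha) * kl
```

Once trained this way, only the lightweight student runs at inference time, which is what makes the framework fast relative to invoking the LLM teacher directly.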