A Graph-Enhanced Defense Framework for Explainable Fake News Detection with LLM

arXiv cs.CL / 4/9/2026


Key Points

  • The paper proposes G-Defense, a graph-enhanced framework for explainable fake news detection that generates veracity judgments and human-friendly explanations.
  • It decomposes each news claim into sub-claims, builds a dependency structure as a claim-centered graph, and uses retrieval-augmented generation (RAG) to retrieve evidence for each sub-claim.
  • A defense-like inference module operating on the graph evaluates overall claim veracity, aiming to reduce the risk of inaccuracies from unverified externally retrieved reports.
  • The framework prompts an LLM to produce an intuitive explanation graph designed to cover all aspects of a claim, helping the public verify it.
  • Experiments report state-of-the-art performance for both veracity detection and explanation quality compared with prior approaches.
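The decomposition and graph-building step in the points above can be sketched as a small data structure. This is a minimal illustration, not the paper's implementation: the sub-claims and dependency edges below are hypothetical stand-ins for what an LLM decomposition call would return.

```python
from dataclasses import dataclass, field


@dataclass
class SubClaim:
    text: str
    depends_on: list = field(default_factory=list)  # indices of prerequisite sub-claims


@dataclass
class ClaimGraph:
    claim: str
    sub_claims: list = field(default_factory=list)

    def add(self, text, depends_on=None):
        """Add a sub-claim node and return its index."""
        self.sub_claims.append(SubClaim(text, depends_on or []))
        return len(self.sub_claims) - 1


def decompose(claim):
    # Placeholder for an LLM decomposition prompt; the sub-claims
    # below are invented for illustration only.
    g = ClaimGraph(claim)
    a = g.add("Event X happened on date D")
    b = g.add("Person P was present at event X", depends_on=[a])
    g.add("P made statement S at event X", depends_on=[b])
    return g


g = decompose("Person P made statement S at event X on date D")
print(len(g.sub_claims))  # → 3
```

Each sub-claim node would then be paired with evidence retrieved via RAG, with the dependency edges encoding which sub-claims must hold before a dependent one can be assessed.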

Abstract

Explainable fake news detection aims to assess the veracity of news claims while providing human-friendly explanations. Existing methods incorporating investigative journalism are often inefficient and struggle with breaking news. Recent advances in large language models (LLMs) enable leveraging externally retrieved reports as evidence for detection and explanation generation, but unverified reports may introduce inaccuracies. Moreover, effective explainable fake news detection should provide a comprehensible explanation for all aspects of a claim to assist the public in verifying its accuracy. To address these challenges, we propose a graph-enhanced defense framework (G-Defense) that provides fine-grained explanations based solely on unverified reports. Specifically, we construct a claim-centered graph by decomposing the news claim into several sub-claims and modeling their dependency relationships. For each sub-claim, we use the retrieval-augmented generation (RAG) technique to retrieve salient evidence and generate competing explanations. We then introduce a defense-like inference module based on the graph to assess the overall veracity. Finally, we prompt an LLM to generate an intuitive explanation graph. Experimental results demonstrate that G-Defense achieves state-of-the-art performance in both veracity detection and the quality of its explanations.
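The defense-like inference over the claim-centered graph could look something like the sketch below. The scoring rule is an assumption for illustration (support/refute evidence strengths per sub-claim, with a node's credibility gated by its weakest prerequisite), not the paper's actual formulation; in the real framework these scores would come from competing RAG-generated explanations.

```python
def infer_veracity(scores, deps):
    """Toy defense-like aggregation over a claim-centered graph.

    scores: node -> (support, refute) evidence strengths in [0, 1],
            hypothetically produced by competing RAG explanations.
    deps:   node -> list of prerequisite node ids (graph edges).
    Returns (overall_veracity, per_node_credibility).
    """
    memo = {}

    def cred(n):
        if n in memo:
            return memo[n]
        support, refute = scores[n]
        local = (support - refute + 1) / 2  # map margin to [0, 1]
        # A sub-claim is only as credible as its weakest prerequisite.
        gate = min((cred(p) for p in deps.get(n, [])), default=1.0)
        memo[n] = local * gate
        return memo[n]

    overall = min(cred(n) for n in scores)
    return overall, memo


# Hypothetical example: node 1 is refuted, dragging down its dependents.
scores = {0: (0.9, 0.1), 1: (0.2, 0.8), 2: (0.7, 0.2)}
deps = {1: [0], 2: [1]}
overall, per_node = infer_veracity(scores, deps)
print(round(overall, 3))  # → 0.135
```

Taking the minimum reflects the intuition that a claim is only as trustworthy as its weakest supported sub-claim; a weighted or learned aggregation would be an equally plausible design choice.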