Graph Neural Networks for Misinformation Detection: Performance-Efficiency Trade-offs

arXiv cs.CL / 4/10/2026


Key Points

  • The paper benchmarks lightweight graph neural networks (GCN, GraphSAGE, GAT, ChebNet) against non-graph baselines (logistic regression, SVM, MLP) for misinformation detection using identical TF-IDF inputs to isolate the benefit of relational structure.
  • Experiments across seven public English/Indonesian/Polish datasets show that GNNs consistently achieve higher F1 scores than non-graph methods while maintaining comparable or lower inference times.
  • Reported examples include GraphSAGE reaching 96.8% F1 on Kaggle and 91.9% on WELFake, versus 73.2% and 66.8% for MLP, respectively.
  • Results on COVID-19 and FakeNewsNet further reinforce the pattern, with GraphSAGE and ChebNet outperforming MLP under the same feature setup.
  • The authors argue that classical, efficient GNN architectures can deliver strong accuracy without requiring increasingly complex (and potentially costly) model designs.
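The "relational structure" the paper isolates comes from GNN message passing: each document's TF-IDF vector is mixed with its neighbours' vectors before classification. As a minimal sketch (not the paper's implementation), one GCN propagation step over a document graph looks like this, where the adjacency matrix `A`, features `X`, and weights `W` are toy placeholders:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN propagation step: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).

    A: (n, n) binary adjacency matrix of the document graph
    X: (n, d) node features (e.g. TF-IDF vectors)
    W: (d, h) weight matrix (learned in practice; fixed here)
    """
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
    return np.maximum(A_norm @ X @ W, 0.0)  # ReLU activation

# Toy example: 3 documents, documents 0 and 1 linked
A = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)
H = gcn_layer(A, X, W)
print(H)  # rows 0 and 1 are averaged together; row 2 is unchanged
```

A non-graph baseline like the paper's MLP would classify each row of `X` independently; the GNN variants differ only in replacing that per-document step with neighbourhood aggregation like the above.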

Abstract

The rapid spread of online misinformation has led to increasingly complex detection models, including large language models and hybrid architectures. However, their computational cost and deployment limitations raise concerns about practical applicability. In this work, we benchmark graph neural networks (GNNs) against non-graph-based machine learning methods under controlled and comparable conditions. We evaluate lightweight GNN architectures (GCN, GraphSAGE, GAT, ChebNet) against Logistic Regression, Support Vector Machines, and Multilayer Perceptrons across seven public datasets in English, Indonesian, and Polish. All models use identical TF-IDF features to isolate the impact of relational structure. Performance is measured using F1 score, with inference time reported to assess efficiency. GNNs consistently outperform non-graph baselines across all datasets. For example, GraphSAGE achieves 96.8% F1 on Kaggle and 91.9% on WELFake, compared to 73.2% and 66.8% for MLP, respectively. On COVID-19, GraphSAGE reaches 90.5% F1 vs. 74.9%, while ChebNet attains 79.1% vs. 66.4% on FakeNewsNet. These gains are achieved with comparable or lower inference times. Overall, the results show that classic GNNs remain effective and efficient, challenging the need for increasingly complex architectures in misinformation detection.
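The abstract does not say how the document graph is built from TF-IDF features. One common construction (an assumption here, not a claim about the paper's method) is a k-nearest-neighbour graph over cosine similarity between document vectors:

```python
import numpy as np

def knn_graph(X, k=2):
    """Build a symmetric k-nearest-neighbour adjacency matrix from
    cosine similarity between row vectors (e.g. TF-IDF documents).
    Hypothetical construction; the paper may use a different graph."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    S = (X / norms) @ (X / norms).T      # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)         # exclude self-matches
    A = np.zeros_like(S)
    for i in range(S.shape[0]):
        nbrs = np.argsort(S[i])[-k:]     # indices of k most similar docs
        A[i, nbrs] = 1.0
    return np.maximum(A, A.T)            # symmetrise the edge set

# Toy TF-IDF-like matrix: rows 0 and 1 are near-duplicates
X = np.array([[1.0, 0.0, 0.0],
              [0.9, 0.1, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A = knn_graph(X, k=1)
print(A[0, 1])  # 1.0 — the two similar documents are linked
```

An adjacency matrix like `A`, together with the same TF-IDF matrix fed to the non-graph baselines, is all the GNN architectures in the study need as input, which is what makes the controlled comparison possible.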