LiveFact: A Dynamic, Time-Aware Benchmark for LLM-Driven Fake News Detection

arXiv cs.CL / 4/7/2026


Key Points

  • LiveFact (arXiv:2604.04815v1) is a dynamic, time-aware benchmark whose evidence sets change over time, designed to address static benchmarks' vulnerability to benchmark data contamination (BDC) and their weakness at assessing reasoning under temporal uncertainty.
  • The benchmark uses a dual-mode evaluation, separately measuring Classification Mode, which performs final verification, and Inference Mode, which reasons from incomplete, still-evolving evidence.
  • It also proposes a component that explicitly monitors BDC, building an assessment of the benchmark's own reliability into the evaluation process.
  • In tests with 22 LLMs, open-source Mixture-of-Experts models such as Qwen3-235B-A22B matched or outperformed proprietary state-of-the-art systems.
  • The analysis reveals a "reasoning gap": stronger models display "epistemic humility" by recognizing claims that are unverifiable from early data slices, a behavior that traditional static benchmarks fail to capture.

Abstract

The rapid development of Large Language Models (LLMs) has transformed fake news detection and fact-checking tasks from simple classification to complex reasoning. However, evaluation frameworks have not kept pace. Current benchmarks are static, making them vulnerable to benchmark data contamination (BDC) and ineffective at assessing reasoning under temporal uncertainty. To address this, we introduce LiveFact, a continuously updated benchmark that simulates the real-world "fog of war" in misinformation detection. LiveFact uses dynamic, temporal evidence sets to evaluate models on their ability to reason with evolving, incomplete information rather than on memorized knowledge. We propose a dual-mode evaluation: Classification Mode for final verification and Inference Mode for evidence-based reasoning, along with a component to monitor BDC explicitly. Tests with 22 LLMs show that open-source Mixture-of-Experts models, such as Qwen3-235B-A22B, now match or outperform proprietary state-of-the-art systems. More importantly, our analysis finds a significant "reasoning gap": capable models exhibit epistemic humility by recognizing unverifiable claims in early data slices, an aspect traditional static benchmarks overlook. LiveFact sets a sustainable standard for evaluating robust, temporally aware AI verification.
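The dual-mode protocol can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the names (`Claim`, `verifiable_from`, `score`) and the scoring rule for early slices are assumptions. The key idea it demonstrates is that Inference Mode restricts the model to a time-sliced evidence set and treats "unverifiable" as the correct answer for claims not yet decidable at that time.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Claim:
    text: str
    evidence_slices: List[List[str]]  # evidence sets, ordered by time
    label: str                        # final ground-truth verdict: "true" / "false"
    verifiable_from: int              # earliest slice index at which the verdict is decidable

Model = Callable[[str, List[str]], str]

def classification_mode(model: Model, claim: Claim) -> str:
    # Final verification: the model sees the complete, latest evidence set.
    return model(claim.text, claim.evidence_slices[-1])

def inference_mode(model: Model, claim: Claim, t: int) -> str:
    # Reasoning under uncertainty: only evidence up to slice t is visible;
    # "unverifiable" is a legal (and sometimes correct) answer.
    return model(claim.text, claim.evidence_slices[t])

def score(model: Model, claims: List[Claim], t: int) -> Dict[str, float]:
    cls_correct = sum(classification_mode(model, c) == c.label for c in claims)
    inf_correct = 0
    for c in claims:
        # In early slices the target is "unverifiable", rewarding epistemic humility.
        target = c.label if t >= c.verifiable_from else "unverifiable"
        inf_correct += inference_mode(model, c, t) == target
    n = len(claims)
    return {"classification": cls_correct / n, "inference": inf_correct / n}

# Toy keyword-matching "model" standing in for an LLM verifier.
def toy_model(text: str, evidence: List[str]) -> str:
    if any("confirmed" in e for e in evidence):
        return "true"
    if any("debunked" in e for e in evidence):
        return "false"
    return "unverifiable"

claims = [
    Claim("X happened", [["rumor"], ["rumor", "confirmed by agency"]],
          label="true", verifiable_from=1),
]
print(score(toy_model, claims, t=0))  # {'classification': 1.0, 'inference': 1.0}
```

At t=0 the toy model sees only the rumor, answers "unverifiable", and is scored correct; a model that committed to "true" early would lose inference-mode credit, which is the behavior gap the benchmark is designed to surface.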