Exploring Knowledge Conflicts for Faithful LLM Reasoning: Benchmark and Method

arXiv cs.CL · April 14, 2026


Key Points

  • The paper introduces ConflictQA, a new benchmark designed to test “knowledge conflicts” in LLM reasoning, specifically conflicts between textual evidence and knowledge-graph (KG) evidence.
  • Prior research mainly examined conflicts between retrieved external knowledge and a model’s internal (parametric) knowledge, while this work targets cross-source conflicts across multiple external knowledge forms.
  • Experiments across representative LLMs show that when faced with conflicting textual and KG evidence, models frequently fail to select reliable evidence and often produce incorrect answers.
  • The study finds that cross-source conflicts make LLM behavior more sensitive to prompting, with models tending to over-rely on either KG or text rather than integrating both.
  • To address these issues, the authors propose XoT, a two-stage explanation-based thinking framework for heterogeneous conflicting evidence, and validate its effectiveness through extensive evaluations.
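The cross-source conflicts described above can be made concrete with a toy example. The sketch below is purely illustrative: the question, evidence strings, and the naive conflict check are hypothetical and do not come from the ConflictQA benchmark itself.

```python
# Hypothetical illustration of a cross-source knowledge conflict:
# the same question is answered differently by a text passage and a KG triple.
# None of these names or values come from ConflictQA itself.

def build_conflict_example():
    """Return a toy QA instance whose two evidence sources disagree."""
    return {
        "question": "In which year was the Example Bridge completed?",
        "text_evidence": "News article: 'The Example Bridge opened in 1931.'",
        # KG evidence as a (head, relation, tail) triple
        "kg_evidence": ("Example_Bridge", "completion_year", "1930"),
    }

def evidence_conflicts(example):
    """Naive check: does the KG tail value appear in the text evidence?"""
    tail = example["kg_evidence"][2]
    return tail not in example["text_evidence"]

example = build_conflict_example()
print(evidence_conflicts(example))  # the two sources disagree -> True
```

A real benchmark instance would of course pair longer passages with larger KG subgraphs, but the structural point is the same: two external sources, each individually plausible, supporting incompatible answers.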

Abstract

Large language models (LLMs) have achieved remarkable success across a wide range of applications, especially when augmented by external knowledge through retrieval-augmented generation (RAG). Despite their widespread adoption, recent studies have shown that LLMs often struggle to perform faithful reasoning when conflicting knowledge is retrieved. However, existing work primarily focuses on conflicts between external knowledge and the parametric knowledge of LLMs, leaving conflicts across external knowledge largely unexplored. Meanwhile, modern RAG systems increasingly emphasize the integration of unstructured text and (semi-)structured data like knowledge graphs (KGs) to improve knowledge completeness and reasoning faithfulness. To address this gap, we introduce ConflictQA, a novel benchmark that systematically instantiates conflicts between textual evidence and KG evidence. Extensive evaluations across representative LLMs reveal that, facing such cross-source conflicts, LLMs often fail to identify reliable evidence for correct reasoning. Instead, LLMs become more sensitive to prompting choices and tend to rely exclusively on either KG or textual evidence, resulting in incorrect responses. Based on these findings, we further propose XoT, a two-stage explanation-based thinking framework tailored for reasoning over heterogeneous conflicting evidence, and verify its effectiveness with extensive experiments.
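The abstract describes XoT only at a high level, as a two-stage explanation-based framework. A generic explain-then-decide loop over heterogeneous evidence might look like the following sketch; the function names, prompts, and the `llm` placeholder callable are all assumptions for illustration, not the authors' actual method.

```python
from typing import Callable

def two_stage_reasoning(question: str, text_ev: str, kg_ev: str,
                        llm: Callable[[str], str]) -> str:
    """Generic explain-then-decide loop over heterogeneous evidence.

    Stage 1: ask the model to explain what answer each source supports.
    Stage 2: ask it to weigh the two explanations and commit to one answer.
    This is a hypothetical sketch, not the XoT framework from the paper.
    """
    # Stage 1: one explanation per evidence source.
    text_expl = llm(f"Question: {question}\nText evidence: {text_ev}\n"
                    "Explain what answer this evidence supports and why.")
    kg_expl = llm(f"Question: {question}\nKG evidence: {kg_ev}\n"
                  "Explain what answer this evidence supports and why.")
    # Stage 2: reason over the (possibly conflicting) explanations.
    return llm(f"Question: {question}\n"
               f"Explanation from text: {text_expl}\n"
               f"Explanation from KG: {kg_expl}\n"
               "The explanations may conflict. Pick the more reliable one "
               "and state the final answer.")

# Usage with a stub standing in for a real model call:
answer = two_stage_reasoning(
    "Who founded ExampleCorp?",
    "A 2020 profile names Alice as the founder.",
    "(ExampleCorp, founder, Bob)",
    llm=lambda prompt: "stub: " + prompt.splitlines()[0],
)
```

The design intuition matching the paper's findings: forcing the model to first articulate what each source claims, before answering, is one way to discourage the observed failure mode of latching onto a single source and ignoring the other.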