CausalGaze: Unveiling Hallucinations via Counterfactual Graph Intervention in Large Language Models

arXiv cs.LG / 4/14/2026


Key Points

  • The paper introduces CausalGaze, a hallucination-detection framework that treats an LLM’s internal activations as a dynamic causal graph using structural causal models (SCMs).
  • Instead of passively classifying hallucinations from static internal signals, CausalGaze uses counterfactual graph interventions to separate causal reasoning paths from incidental noise and spurious correlations.
  • Experiments across four datasets and three common LLMs show consistent improvements, including an AUROC gain of over 5.2% on TruthfulQA versus state-of-the-art baselines.
  • The work aims to improve both hallucination detection performance and interpretability by making the causal mechanisms behind generation more inspectable.
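The idea of a counterfactual graph intervention can be illustrated with a toy example. The sketch below is purely conceptual and not the paper's actual CausalGaze implementation (which operates on real LLM internal states); the tiny two-layer "graph", its weights, and the zero-baseline intervention are all hypothetical. It applies a do-style intervention to an intermediate activation and measures how much the output shifts, the intuition being that nodes on genuine causal paths produce large shifts while incidental ones do not.

```python
import numpy as np

# Toy "causal graph" over activations: h1 -> h2 -> logits.
# Hypothetical stand-in for an LLM's internal states; all weights
# and the zero-baseline intervention are illustrative assumptions.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))   # maps h1 -> h2
W2 = rng.normal(size=(4, 2))   # maps h2 -> logits

def forward(h1, h2_override=None):
    """Run the toy graph; optionally apply do(h2 := h2_override)."""
    h2 = np.tanh(h1 @ W1) if h2_override is None else h2_override
    return h2 @ W2  # logits

h1 = rng.normal(size=4)
base_logits = forward(h1)

# Counterfactual intervention: clamp h2 to a baseline (here zeros)
# and measure the output shift. A large shift suggests h2 lies on a
# causal path; a small one suggests it is incidental noise.
cf_logits = forward(h1, h2_override=np.zeros(4))
causal_effect = np.linalg.norm(base_logits - cf_logits)
print(float(causal_effect))
```

In a real setting the override would be applied via activation patching on an actual model (e.g. forward hooks), and the effect size aggregated per node to score which internal states causally drive a potentially hallucinated answer.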

Abstract

Despite the groundbreaking advancements made by large language models (LLMs), hallucination remains a critical bottleneck for their deployment in high-stakes domains. Existing classification-based methods rely mainly on static, passive signals from internal states, which often capture noise and spurious correlations while overlooking the underlying causal mechanisms. To address this limitation, we shift the paradigm from passive observation to active intervention by introducing CausalGaze, a novel hallucination detection framework based on structural causal models (SCMs). CausalGaze models an LLM's internal states as dynamic causal graphs and employs counterfactual interventions to disentangle causal reasoning paths from incidental noise, thereby enhancing model interpretability. Extensive experiments across four datasets and three widely used LLMs demonstrate the effectiveness of CausalGaze, notably an AUROC improvement of over 5.2% on the TruthfulQA dataset compared to state-of-the-art baselines.