When Do Hallucinations Arise? A Graph Perspective on the Evolution of Path Reuse and Path Compression
arXiv cs.AI / 4/7/2026
Key Points
- The paper analyzes reasoning hallucinations in decoder-only Transformers by reframing next-token prediction as graph search over an underlying learned entity-transition graph.
- It distinguishes two modes of reasoning: contextual reasoning as constrained search in a sampled subgraph, versus context-free reasoning as reliance on memorized structures in the underlying graph.
- The authors identify two core mechanisms behind hallucinations: Path Reuse (memorized knowledge overriding contextual constraints early in training) and Path Compression (frequent multi-step paths collapsing into shortcut edges later in training); see the toy sketch after this list.
- By unifying these mechanisms, the work offers an explanation for why hallucinations can be fluent yet inconsistent with both provided context and factual knowledge.
- The findings connect the proposed graph-theoretic training dynamics to behaviors reported in downstream LLM applications, suggesting broader relevance beyond the specific modeling framework.
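The two mechanisms lend themselves to a toy illustration. The sketch below is a minimal, hypothetical simplification of the graph-search framing described above, not the paper's model or code: the `memorized` graph, `context` subgraph, `alpha` mixing weight, and the `next_entity` / `compress_paths` functions are all invented here for illustration. Path Reuse appears when a strongly memorized edge outweighs the contextual subgraph; Path Compression appears when a frequent two-hop path is replaced by a direct shortcut edge.

```python
# Hypothetical illustration (not the authors' code) of the two hallucination
# mechanisms, framed as greedy next-edge selection on a toy entity-transition
# graph. Edge weights stand in for memorized transition frequencies; the
# "context" is a sampled subgraph the model is supposed to respect.

from collections import defaultdict

# Memorized (parametric) entity-transition graph: weight ~ how often the
# transition was seen during training.
memorized = defaultdict(dict)
memorized["Paris"]["France"] = 0.9      # strong memorized edge
memorized["Paris"]["Texas"] = 0.1
memorized["France"]["Euro"] = 0.8

# Contextual subgraph: the prompt explicitly talks about Paris, Texas (USA).
context = {"Paris": {"Texas"}, "Texas": {"USA"}}


def next_entity(current, alpha=0.5):
    """Pick the next entity by mixing contextual constraints with memorized edges.

    alpha weights the memorized graph; a large alpha lets a high-frequency
    memorized edge override the context -- the Path Reuse failure mode.
    """
    scores = {}
    for nxt, w in memorized[current].items():
        in_ctx = 1.0 if nxt in context.get(current, set()) else 0.0
        scores[nxt] = alpha * w + (1 - alpha) * in_ctx
    return max(scores, key=scores.get) if scores else None


# Path Reuse: with alpha high enough, the memorized Paris->France edge wins
# even though the context only licenses Paris->Texas.
print(next_entity("Paris", alpha=0.9))   # -> 'France' (contextual hallucination)
print(next_entity("Paris", alpha=0.2))   # -> 'Texas'  (context respected)


def compress_paths(graph, threshold=0.6):
    """Path Compression: collapse a frequent two-hop path A->B->C into a direct
    shortcut edge A->C whose weight is the product of the hop weights."""
    shortcuts = defaultdict(dict)
    for a, nbrs in graph.items():
        for b, w_ab in nbrs.items():
            for c, w_bc in graph.get(b, {}).items():
                w = w_ab * w_bc
                if w >= threshold:
                    shortcuts[a][c] = max(shortcuts[a].get(c, 0.0), w)
    return shortcuts


# Later in training, the frequent Paris->France->Euro path is compressed into
# a direct Paris->Euro edge, so 'Euro' can be emitted while the intermediate
# entity is skipped entirely.
print(dict(compress_paths(memorized)))   # -> roughly {'Paris': {'Euro': 0.72}}
```

In the paper's terms, the first failure yields output inconsistent with the provided context, while the second yields fluent answers that skip intermediate reasoning steps, which is one reading of why hallucinations can look confident yet be wrong on both counts.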