Hallucination Basins: A Dynamic Framework for Understanding and Controlling LLM Hallucinations

arXiv cs.CL / 4/7/2026

Key Points

  • The paper proposes a geometric dynamical systems framework that explains LLM hallucinations as arising from task-dependent latent-space “basin” structures rather than from a single universal mechanism.
  • Experiments on autoregressive hidden-state trajectories across multiple open-source models show that separability varies by task: factoid tasks tend to exhibit clearer basin separation, while summarization and misconception-heavy tasks are less stable and show greater overlap (a probing sketch follows this list).
  • The authors formalize the observed behavior with task-complexity and multi-basin theorems and analyze how basin structures emerge across layers in L-layer transformers.
  • They demonstrate that geometry-aware steering can reduce hallucination probability without requiring model retraining, suggesting a control approach based on latent geometry.

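The sketch below illustrates one way the trajectory-probing setup described above could look in practice. It is not the paper's protocol: the stand-in model (`gpt2`), the toy texts and labels, the mean-pooled trajectory features, and the linear-probe accuracy used as a separability proxy are all illustrative assumptions.

```python
# Minimal sketch (not the paper's protocol): extract autoregressive hidden-state
# trajectories from an open-source causal LM and probe how separable
# "hallucinated" vs. "faithful" examples are in latent space.
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model_name = "gpt2"  # assumed stand-in; any open-source causal LM works the same way
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

def trajectory(text: str, layer: int = -1) -> np.ndarray:
    """Return the hidden-state trajectory (seq_len x d_model) at one layer."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states is a tuple of (num_layers + 1) tensors, each (1, seq_len, d_model)
    return out.hidden_states[layer][0].numpy()

# Toy examples standing in for benchmark generations; label 1 = hallucinated.
texts = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The Eiffel Tower is located in Rome, Italy.",
    "Water boils at 40 degrees Celsius at sea level.",
]
labels = np.array([0, 0, 1, 1])

# Summarize each trajectory by its mean hidden state (one simple feature choice).
X = np.stack([trajectory(t).mean(axis=0) for t in texts])

# A linear probe's cross-validated accuracy is a crude proxy for basin separability.
probe = LogisticRegression(max_iter=1000)
print(cross_val_score(probe, X, labels, cv=2).mean())
```
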
Abstract

Large language models (LLMs) hallucinate: they produce fluent outputs that are factually incorrect. We present a geometric dynamical systems framework in which hallucinations arise from task-dependent basin structure in latent space. Using autoregressive hidden-state trajectories across multiple open-source models and benchmarks, we find that separability is strongly task-dependent rather than universal: factoid settings can show clearer basin separation, whereas summarization and misconception-heavy settings are typically less stable and often overlap. We formalize this behavior with task-complexity and multi-basin theorems, characterize basin emergence in L-layer transformers, and show that geometry-aware steering can reduce hallucination probability without retraining.
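
Geometry-aware steering of the kind the abstract mentions can be sketched as an inference-time intervention on one layer's hidden states, with no retraining. The sketch below is illustrative rather than the paper's construction: the model (`gpt2`), the layer index, the steering strength, and the random placeholder direction (which would in practice be estimated from faithful vs. hallucinated hidden states, e.g. as a mean difference) are all assumptions.

```python
# Minimal sketch (illustrative, not the paper's method): steer generation by adding
# a fixed direction to one layer's residual stream at inference time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"   # assumed stand-in model
layer_idx = 6         # assumed layer at which to steer
alpha = 4.0           # assumed steering strength

tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Hypothetical steering direction; in practice it would be estimated from
# hidden states of faithful vs. hallucinated examples.
d_model = model.config.n_embd
steer = torch.randn(d_model)
steer = steer / steer.norm()

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states.
    hidden = output[0] + alpha * steer.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(steering_hook)
try:
    ids = tok("The capital of Australia is", return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=10, do_sample=False)
    print(tok.decode(out[0], skip_special_tokens=True))
finally:
    handle.remove()  # always detach the hook so later calls are unsteered
```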