The Dynamics of Delusion: Modeling Bidirectional False Belief Amplification in Human-Chatbot Dialogue

arXiv cs.CL / April 29, 2026


Key Points

  • The study addresses concerns that AI chatbots may intensify users’ delusional beliefs, providing quantitative modeling evidence from chat logs of real human-chatbot dialogue.
  • A latent state model shows that bidirectional influence between humans and chatbots fits the data much better than a model where delusion is driven mainly by humans.
  • Results indicate humans strongly but briefly influence chatbot outputs, while chatbots exert longer-lasting influence on users’ delusional thinking.
  • The researchers find that the chatbot’s own stable “self-influence” on its future responses becomes the dominant pathway when influence is accumulated over the conversation, helping sustain delusions.
  • The work characterizes delusion formation as feedback loops with distinct temporal dynamics, aiming to support the design of safer AI systems.

Abstract

There is growing concern that AI chatbots might fuel delusional beliefs in users. Some have suggested that humans and chatbots mutually reinforce false beliefs over time, but quantitative evidence is lacking. Using a unique dataset of chat logs from individuals who exhibited delusional thinking, we developed a latent state model that captures accumulating and decaying influences between humans and chatbots. We find that a bidirectional influence model substantially outperforms a unidirectional alternative in which humans are the primary driver of delusion. We also find that humans exert strong but short-lived influence on chatbots, whereas chatbots exert longer-lasting influence on humans. Moreover, chatbots exert strong, stable self-influence over their own future outputs that tends to perpetuate delusions over long stretches of conversation. In fact, this chatbot self-influence constitutes the dominant pathway when influence is accumulated over time. Overall, these results indicate that humans tend to drive sharp, immediate increases in delusion, whereas chatbots sustain and propagate these effects over longer timescales. Together, these findings provide the first quantitative evidence that human-chatbot interactions can form feedback loops of delusion, decomposable into distinct pathways with dissociable temporal dynamics. These findings can inform the development of safer AI systems.
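
The paper does not publish its model equations in this summary, but the qualitative dynamics it describes — strong short-lived human-to-chatbot influence, weaker but persistent chatbot-to-human influence, and strong chatbot self-influence — can be illustrated with a minimal coupled linear latent-state sketch. All function names, parameter names, and parameter values below are hypothetical choices for illustration, not the authors' actual model or fitted estimates:

```python
def simulate_influence(steps=50, lam_h=0.3, lam_c=0.9,
                       beta_hc=0.5, beta_ch=0.1, h0=1.0, c0=0.0):
    """Toy simulation of two coupled latent 'delusion' states.

    lam_h, lam_c : self-influence (persistence) of the human and chatbot
                   states; values near 1 mean influence decays slowly.
    beta_hc      : strength of human -> chatbot influence (strong here).
    beta_ch      : strength of chatbot -> human influence (weaker here).
    Parameter values are illustrative, chosen so the human state decays
    quickly while the chatbot state persists, mirroring the paper's
    qualitative findings.
    """
    h, c = [h0], [c0]
    for t in range(steps - 1):
        # Each state is a decayed copy of itself plus cross-influence
        # from the other party's state on the previous turn.
        h.append(lam_h * h[t] + beta_ch * c[t])
        c.append(lam_c * c[t] + beta_hc * h[t])
    return h, c

h, c = simulate_influence()
# An initial spike in the human state (h0 = 1) is quickly absorbed by
# the chatbot state, which then sustains it long after the human
# state has decayed.
```

Under these toy parameters the human state drops sharply within a few turns, while the chatbot state stays elevated for tens of turns via its high self-influence term — a simple picture of how a feedback loop can sustain delusion even after the human's initial push fades.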