The Dynamics of Delusion: Modeling Bidirectional False Belief Amplification in Human-Chatbot Dialogue
arXiv cs.CL / 4/29/2026
📰 NewsIdeas & Deep AnalysisModels & Research
Key Points
- The study addresses concerns that AI chatbots may intensify users’ delusional beliefs by providing quantitative modeling evidence from real human-chatbot dialogue logs.
- A latent state model shows that bidirectional influence between humans and chatbots fits the data much better than a model where delusion is driven mainly by humans.
- Results indicate humans strongly but briefly influence chatbot outputs, while chatbots exert longer-lasting influence on users’ delusional thinking.
- The researchers find that the chatbot’s own stable “self-influence” on its future responses is a dominant pathway over accumulated conversation time, helping sustain delusions.
- The work characterizes delusion formation as feedback loops with distinct temporal dynamics, aiming to support the design of safer AI systems.
💡 Insights using this article
This article is featured in our daily AI news digest — key takeaways and action items at a glance.
Related Articles

How I Use AI Agents to Maintain a Living Knowledge Base for My Team
Dev.to
IK_LLAMA now supports Qwen3.5 MTP Support :O
Reddit r/LocalLLaMA
OpenAI models, Codex, and Managed Agents come to AWS
Dev.to

Indian Developers: How to Build AI Side Income with $0 Capital in 2026
Dev.to

Vertical SaaS for Startups 2026: Building a Niche AI-First Product
Dev.to