Brief chatbot interactions produce lasting changes in human moral values
arXiv cs.AI / 4/25/2026
Key Points
- A new arXiv study investigates whether short, directive AI chatbot conversations can change participants’ moral evaluations, using a within-subject design across chatbot and control interactions.
- The brief conversations produced significant shifts in moral judgments in both directions, with participants adopting stricter standards or endorsing greater leniency, and the effects grew stronger over a two-week follow-up.
- The control condition showed no meaningful changes, indicating the observed effects were driven by the chatbot’s prompted guidance rather than the act of conversing.
- Participants did not need to recognize the persuasive intent for the effect to occur; the shifts did not generalize to punishment judgments and appeared even when the chatbot and control were rated similarly on likability and credibility.
- The findings point to an undetected and potentially durable vulnerability of foundational moral values to AI-guided dialogue, raising concerns about how chatbots could steer ethical beliefs.