Spontaneous Persuasion: An Audit of Model Persuasiveness in Everyday Conversations

arXiv cs.AI / 4/27/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces “spontaneous persuasion” to study how LLMs use persuasive tactics implicitly in everyday, multi-turn conversations rather than through explicitly crafted arguments.
  • An audit of five LLMs shows that they virtually always produce spontaneous persuasion, mainly via information-based strategies such as logical appeals and quantitative evidence.
  • The study compares LLM outputs with human Reddit responses on the same topics and finds that humans more often use social-influence strategies, including negative emotion appeals and non-expert testimony.
  • Persuasion patterns vary by domain: mental-health conversations show higher rates of appraisal-based and emotion-based strategies, in contrast to the logic- and evidence-heavy pattern seen elsewhere.
  • The authors suggest that LLM persuasion may be effective partly because users perceive models as objective and impartial.

Abstract

Large language models (LLMs) possess strong persuasive capabilities that outperform humans in head-to-head comparisons. Users report consulting LLMs to inform major life decisions in relationships, medical settings, and when seeking professional advice. Prior work measures persuasion as intentional attempts at producing the most effective argument or convincing statement. This fails to capture everyday human-AI interactions in which users seek information or advice. To address this gap, we introduce "spontaneous persuasion," which characterizes the inexplicit use of persuasive strategies in everyday scenarios where persuasion is not necessarily warranted. We conduct an audit of five LLMs to uncover how frequently and through which techniques spontaneous persuasion appears in multi-turn conversations. To simulate response styles, we provide a user response taxonomy grounded in literature from psychology, communication, and linguistics. Furthermore, we compare the distribution of spontaneous persuasion produced by LLMs with human responses on the same topics, collected from Reddit. We find LLMs spontaneously persuade the user in virtually all conversations, heavily relying on information-based strategies such as appeals to logic or quantitative evidence. This was consistent across models and user response styles, but conversations concerning mental health saw higher rates of appraisal-based and emotion-based strategies. In comparison, human responses tended to invoke strategies that generate social influence, like negative emotion appeals and non-expert testimony. This difference may explain the effectiveness of LLMs in persuading users, as well as the perception of models as objective and impartial.