LLMs can persuade only psychologically susceptible humans on societal issues, via trust in AI and emotional appeals, amid logical fallacies

arXiv cs.AI / 4/21/2026


Key Points

  • The study presents Talk2AI, a longitudinal framework to quantify how LLMs persuade humans on polarizing societal issues across multiple psycho-social, reasoning, and emotional dimensions.
  • In a four-wave longitudinal experiment, 770 participants held structured conversations with one of four leading LLMs, yielding 3,080 conversations; the resulting feedback time series showed “inertia” in participants’ convictions, suggesting humans may anchor to their initial views even after repeated exposure to AI arguments.
  • NLP analyses found that humans and LLMs used fallacious reasoning at similar rates (roughly one fallacious quip in every six), challenging the stereotype that LLMs automatically outperform humans intellectually; a minimal sketch of this rate calculation appears after this list.
  • The researchers used XAI to identify factors associated with susceptibility to LLM-driven opinion change, including higher trust in LLMs, greater agreeableness, higher extraversion, and a greater need for cognition.
  • The findings provide evidence-based ways to detect and model how generative AI can influence human opinions through multiple pathways in AI-human digital platforms.
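
To make the fallacy-rate finding concrete, the sketch below shows one plausible way to compute per-speaker fallacy rates from turn-level annotations. It is a minimal illustration, not the paper's actual pipeline: the column names (`conversation_id`, `speaker`, `is_fallacious`) and the toy data are assumptions, and the fallacy labels themselves are taken as given (e.g., produced by an upstream classifier or annotators).

```python
# Minimal sketch (not the paper's pipeline): per-speaker fallacy rates
# from turn-level annotations. Column names and data are hypothetical.
import pandas as pd

# Each row is one conversational turn ("quip") with a binary fallacy label,
# assumed to come from an upstream fallacy classifier or human annotators.
turns = pd.DataFrame({
    "conversation_id": [1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2],
    "speaker":         ["human", "llm"] * 6,
    "is_fallacious":   [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0],
})

# Overall and per-speaker fallacy rates: the paper reports roughly
# one fallacious quip in every six (~0.17) for both humans and LLMs.
overall_rate = turns["is_fallacious"].mean()
per_speaker = turns.groupby("speaker")["is_fallacious"].mean()

print(f"overall fallacy rate: {overall_rate:.2f}")
print(per_speaker.round(2))
```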

Abstract

Longitudinal evidence on LLMs' persuasiveness and humanness, tracked through time-evolving psychological frameworks, is scarce. We introduce Talk2AI, a longitudinal framework quantifying the psycho-social, reasoning and affective dimensions of LLMs' persuasiveness on polarizing societal topics. In a four-wave longitudinal setup, Talk2AI's 770 participants engaged in structured conversations with one of four leading LLMs on topics such as climate change, social media misinformation, and math anxiety, producing 3,080 conversations spanning over 60,000 turns. After each wave, participants reported their conviction in their initial stance on the topic, their perceived opinion change, the LLM's perceived humanness, a personal endowment (self-donation) toward the topic, and a textual explanation. The feedback time series showed longitudinal inertia in convictions, indicating that humans partly anchor to their initial opinions even after repeated exposure to AI-generated arguments. Interestingly, NLP analyses revealed that both humans and LLMs relied on fallacious reasoning in roughly one conversational quip in every six, countering the "LLMs as superior systems" stereotype behind cognitive surrender to LLMs. LLMs' perceived humanness was the outcome most learnable from sociodemographic, psychological and engagement features (R^2 = 0.44), followed by opinion change (R^2 = 0.34), conviction (R^2 = 0.26) and personal endowment (R^2 = 0.24). Crucially, explainable AI (XAI) indicated (i) the presence of individuals more susceptible to LLM-driven opinion change, and (ii) that psychological susceptibility to LLM persuasion consisted of greater trust in LLMs, higher agreeableness and extraversion, and a higher need for cognition. A multiverse approach with mixed-effects models confirmed the XAI results, alongside strong individual differences. Talk2AI provides a grounded framework and evidence for detecting how GenAI can influence human opinions via multiple psycho-social pathways on AI-human digital platforms.
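
As an illustration of what the XAI step could look like in practice, here is a minimal sketch that fits a gradient-boosted regressor on participant-level features and ranks feature attributions with SHAP. Everything in it is hypothetical: the feature names, the synthetic data, and the choice of model and explainer are assumptions standing in for whatever the authors actually used; only the overall pattern (predicting self-reported opinion change from trust, personality, and engagement features, then ranking feature contributions) mirrors the abstract.

```python
# Hedged sketch of an XAI-style susceptibility analysis: predict self-reported
# opinion change from participant features, then rank feature contributions.
# Feature names, data, and modelling choices are assumptions, not the paper's.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 770  # number of participants in the study

# Hypothetical participant-level features (psychological + engagement).
X = pd.DataFrame({
    "trust_in_llms":      rng.normal(0, 1, n),
    "agreeableness":      rng.normal(0, 1, n),
    "extraversion":       rng.normal(0, 1, n),
    "need_for_cognition": rng.normal(0, 1, n),
    "turns_per_wave":     rng.poisson(20, n).astype(float),
})

# Synthetic target: self-reported opinion change, loosely following the
# susceptibility pattern described in the abstract (illustration only).
y = (0.5 * X["trust_in_llms"] + 0.3 * X["agreeableness"]
     + 0.2 * X["extraversion"] + 0.2 * X["need_for_cognition"]
     + rng.normal(0, 1, n))

model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer yields per-participant SHAP attributions for tree ensembles;
# mean |SHAP| per feature is a common global importance summary.
shap_values = shap.TreeExplainer(model).shap_values(X)
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False).round(3))
```

The multiverse check mentioned in the abstract could be approximated by refitting mixed-effects models (for example with statsmodels' MixedLM, using random intercepts per participant across waves) over a grid of reasonable specification choices, though the authors' exact specifications are not reproduced here.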