The Persuasion Paradox: When LLM Explanations Fail to Improve Human-AI Team Performance

arXiv cs.AI / 4/7/2026


Key Points

  • The paper introduces a “Persuasion Paradox,” arguing that LLM explanations can increase users’ confidence and reliance without reliably improving—sometimes even reducing—task accuracy in human-AI teams.
  • Across three controlled studies (RAVEN visual reasoning and LSAT-style logical reasoning), explanation-based interfaces raised confidence but often failed to improve accuracy beyond the AI prediction alone, and they weakened users' ability to correct model errors.
  • For visual reasoning, interfaces that show model uncertainty (e.g., predicted probabilities) and use selective automation to defer uncertain cases to humans achieved higher accuracy and better error recovery than explanation-based interfaces (a minimal sketch of such a deferral policy follows this list).
  • For language-based logical reasoning, however, LLM explanations produced the best accuracy and recovery, outperforming both probability-based support and expert-written explanations, indicating strong task-dependent effects.
  • The authors conclude that subjective measures like trust and perceived clarity are poor proxies for performance and recommend designing interaction systems that emphasize calibrated reliance and error recovery rather than persuasive fluency.
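
The paper does not spell out the deferral rule, but the selective automation policy described above can be read as a simple confidence threshold on the model's predicted probabilities. A minimal sketch under that assumption; the 0.8 cutoff and the 8-way RAVEN answer distribution are illustrative, not values from the paper:

```python
import numpy as np

def selective_automation(probs: np.ndarray, threshold: float = 0.8) -> str:
    """Route one case: automate if the model's top-class probability
    clears the confidence threshold, otherwise defer to the human.
    The threshold value is a hypothetical choice, not from the paper."""
    confidence = probs.max()
    if confidence >= threshold:
        return "automate"  # accept the AI prediction as-is
    return "defer"         # hand the case to the human for judgment

# Illustrative 8-way answer distribution for a RAVEN-style item.
probs = np.array([0.05, 0.02, 0.61, 0.08, 0.04, 0.10, 0.06, 0.04])
print(selective_automation(probs))  # "defer": top probability 0.61 < 0.8
```

Under a policy like this, humans only see the cases the model is unsure about, which is one plausible mechanism for the higher error recovery the paper reports.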

Abstract

While natural-language explanations from large language models (LLMs) are widely adopted to improve transparency and trust, their impact on objective human-AI team performance remains poorly understood. We identify a Persuasion Paradox: fluent explanations systematically increase user confidence and reliance on AI without reliably improving task accuracy; in some cases they even undermine it. Across three controlled human-subject studies spanning abstract visual reasoning (RAVEN matrices) and deductive logical reasoning (LSAT problems), we disentangle the effects of AI predictions and explanations using a multi-stage reveal design and between-subjects comparisons. In visual reasoning, LLM explanations increase confidence but do not improve accuracy beyond the AI prediction alone, and substantially suppress users' ability to recover from model errors. Interfaces exposing model uncertainty via predicted probabilities, as well as a selective automation policy that defers uncertain cases to humans, achieve significantly higher accuracy and error recovery than explanation-based interfaces. In contrast, for language-based logical reasoning tasks, LLM explanations yield the highest accuracy and recovery rates, outperforming both expert-written explanations and probability-based support. This divergence reveals that the effectiveness of narrative explanations is strongly task-dependent and mediated by cognitive modality. Our findings demonstrate that commonly used subjective metrics such as trust, confidence, and perceived clarity are poor predictors of human-AI team performance. Rather than treating explanations as a universal solution, we argue for a shift toward interaction designs that prioritize calibrated reliance and effective error recovery over persuasive fluency.
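
The abstract's "error recovery" is not formally defined here. A natural operationalization, assumed for illustration only, is the fraction of AI-error trials on which the human's final answer is nevertheless correct, i.e., the human overrode the wrong prediction:

```python
from typing import Sequence

def error_recovery_rate(ai_correct: Sequence[bool],
                        human_final_correct: Sequence[bool]) -> float:
    """Share of trials where the AI was wrong but the human's final
    answer was right. This is an assumed metric definition, not
    necessarily the one used in the paper."""
    recoveries = [h for a, h in zip(ai_correct, human_final_correct) if not a]
    if not recoveries:
        return float("nan")  # no AI errors observed, rate undefined
    return sum(recoveries) / len(recoveries)

# Toy example: AI wrong on trials 2 and 4; the human recovers only on trial 2.
ai = [True, False, True, False]
human = [True, True, True, False]
print(error_recovery_rate(ai, human))  # 0.5
```

Framed this way, the paper's central contrast is that explanation interfaces can raise confidence while driving this rate down in visual reasoning, yet raise it in language-based logical reasoning.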