Anthropomorphism and Trust in Human-Large Language Model interactions

arXiv cs.AI / April 20, 2026


Key Points

  • The study analyzes how people anthropomorphize and assign trust to large language models using over 2,000 human–LLM interaction instances collected from an experiment with 115 participants.
  • Perceived warmth (friendliness) and cognitive empathy were significant predictors of all measured outcomes: anthropomorphism, trust, perceived similarity, relational closeness, frustration, and usefulness.
  • Perceived competence (capability and coherence) predicted every outcome except anthropomorphism, suggesting that judgments of “human-likeness” arise through different mechanisms than other evaluations.
  • Affective empathy mainly influenced relational perceptions but did not predict epistemic outcomes such as trust.
  • For more subjective, personally relevant topics (e.g., relationship advice), participants showed stronger human-likeness and relational connection with the LLM than for objective topics.

Abstract

As large language models (LLMs) become increasingly prevalent in daily life, so too does the tendency to attribute to them human-like minds and emotions, that is, to anthropomorphize them. Here, we investigate the dimensions people use to anthropomorphize and attribute trust toward LLMs across more than 2,000 human-LLM interactions. Participants (N=115) engaged with LLM chatbots systematically varied in warmth (friendliness), competence (capability, coherence), and empathy (cognitive and affective). Warmth and cognitive empathy significantly predicted all outcomes (perceived anthropomorphism, trust, similarity, relational closeness, frustration, usefulness), while competence predicted all outcomes except anthropomorphism. Affective empathy primarily predicted the relational measures but not the epistemic outcomes. Topic sub-analyses showed that more subjective, personally relevant topics (e.g., relationship advice) amplified these effects, producing greater human-likeness and relational connection with the LLM than objective topics did. Together, these findings reveal that warmth, competence, and empathy are key dimensions through which people attribute relational and epistemic perceptions to artificial agents.