Anthropomorphism and Trust in Human–Large Language Model Interactions
arXiv cs.AI / 4/20/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study examines how people anthropomorphize and place trust in large language models, drawing on more than 2,000 human–LLM interaction instances collected in an experiment with 115 participants.
- Perceived warmth (friendliness) and cognitive empathy were significant predictors of anthropomorphism, trust, perceived similarity, relational closeness, frustration, and usefulness (a regression sketch follows these points).
- Perceived competence (capability and coherence) predicted most outcomes but did not significantly affect anthropomorphism, suggesting that judgments of human-likeness arise through a different mechanism than other evaluations.
- Affective empathy mainly influenced relational perceptions but did not predict epistemic outcomes such as trust-related knowledge judgments.
- For more subjective, personally relevant topics (e.g., relationship advice), participants perceived the LLM as more human-like and reported a stronger relational connection than they did for objective topics.
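
The summary does not state the paper's exact modeling approach, but findings of the form "warmth predicted anthropomorphism while competence did not" typically come from regressions over repeated-measures data. Below is a minimal, hypothetical sketch in Python (statsmodels) of how such predictor–outcome relationships could be estimated with a linear mixed-effects model, using a per-participant random intercept because each of the 115 participants contributed many interactions. All variable names and the synthetic data are illustrative assumptions, not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the study's repeated-measures design:
# 115 participants x ~18 interactions each (~2,000 rows total).
n_participants, n_per = 115, 18
pid = np.repeat(np.arange(n_participants), n_per)
n = len(pid)

df = pd.DataFrame({
    "participant": pid,
    "warmth": rng.normal(size=n),             # perceived warmth (friendliness)
    "competence": rng.normal(size=n),         # perceived capability/coherence
    "cognitive_empathy": rng.normal(size=n),
    "affective_empathy": rng.normal(size=n),
})

# Simulate an outcome consistent with the reported pattern:
# warmth and cognitive empathy matter, competence does not,
# plus participant-level clustering and noise.
subject_effect = rng.normal(scale=0.5, size=n_participants)[pid]
df["anthropomorphism"] = (
    0.6 * df["warmth"]
    + 0.5 * df["cognitive_empathy"]
    + 0.0 * df["competence"]
    + subject_effect
    + rng.normal(scale=1.0, size=n)
)

# Mixed-effects model: fixed effects for the perception ratings,
# random intercept per participant to handle repeated measures.
model = smf.mixedlm(
    "anthropomorphism ~ warmth + competence + cognitive_empathy + affective_empathy",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())
```

The random intercept is the key design choice here: treating roughly 2,000 interactions as independent observations would overstate significance, since ratings from the same participant are correlated.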
Related Articles
- From Theory to Reality: Why Most AI Agent Projects Fail (And How Mine Did Too) (Dev.to)
- GPT-5.4-Cyber: OpenAI's Game-Changer for AI Security and Defensive AI (Dev.to)
- Building Digital Souls: The Brutal Reality of Creating AI That Understands You Like Nobody Else (Dev.to)
- Local LLM Beginner’s Guide (Mac - Apple Silicon) (Reddit r/artificial)
- Is Your Skill Actually Good? Systematically Validating Agent Skills with Evals (Dev.to)