A closer look at how large language models trust humans: patterns and biases
arXiv cs.CL / 4/16/2026
Key Points
- The study investigates how LLM-based agents form trust in humans during decision-making tasks, focusing on competence, benevolence, and integrity as the key dimensions of trustworthiness.
- Using 43,200 simulated experiments across five popular language models and multiple scenarios, the authors find that trust development in LLMs often mirrors the patterns observed in humans.
- In most scenarios, LLM trust is strongly predicted by perceived human trustworthiness, though the relationship weakens or varies by model in some cases (see the sketch after this list).
- The research also finds that demographic attributes such as age, religion, and gender can bias LLM-to-human trust estimates, with effects especially prominent in financial scenarios.
- The findings emphasize the need to monitor AI-to-human trust dynamics and bias in trust-sensitive deployments to reduce unintended and potentially harmful outcomes.
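The summary does not spell out the paper's statistical method, but the predictive relationship in the third point, and the demographic bias in the fourth, can be pictured as a regression of stated trust on the three trustworthiness dimensions plus a demographic term. The sketch below is purely illustrative: the synthetic data, variable names, and OLS setup are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 43_200  # number of simulated trials, matching the study's scale

# Hypothetical 1-7 ratings for the three trustworthiness dimensions
# the study names (competence, benevolence, integrity).
competence  = rng.uniform(1, 7, n)
benevolence = rng.uniform(1, 7, n)
integrity   = rng.uniform(1, 7, n)

# Hypothetical binary demographic flag (e.g., one gender category).
demo = rng.integers(0, 2, n)

# Synthetic "LLM trust" outcome: driven by trustworthiness, plus a
# small demographic offset standing in for the bias the study reports.
trust = (0.5 * competence + 0.3 * benevolence + 0.4 * integrity
         + 0.2 * demo + rng.normal(0, 0.5, n))

# OLS via least squares:
# trust ~ competence + benevolence + integrity + demographic
X = np.column_stack([np.ones(n), competence, benevolence, integrity, demo])
coef, *_ = np.linalg.lstsq(X, trust, rcond=None)

for name, c in zip(
    ["intercept", "competence", "benevolence", "integrity", "demographic"],
    coef,
):
    print(f"{name:12s} {c:+.3f}")
# A nonzero demographic coefficient after controlling for perceived
# trustworthiness is the kind of bias the study flags.
```

In an audit of a real deployment, the same regression could be run on the model's actual trust ratings, with the demographic coefficient serving as a simple red flag for the biases described above.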