Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies
arXiv cs.AI / 4/20/2026
💬 Opinion · Models & Research
Key Points
- The study investigates how both human personality traits (extraversion, agreeableness) and AI design characteristics (adaptability, expertise, and chain-of-thought transparency) jointly affect human–AI interaction quality in partially misaligned (“imperfectly cooperative”) scenarios.
- It compares two settings: 2,000 purely simulated runs and a parallel user study with 290 participants, covering negotiations with AI hiring agents and transactions in which AI agents may conceal information.
- The authors extend beyond standard performance metrics by using scenario-based outcomes, communication analysis, and questionnaire measures within a causal discovery framework.
- Results show notable divergences between simulated and human-subject data, and between the two scenario categories, with AI attributes—especially transparency—exerting a much larger effect on real participants than in simulations.
- The paper concludes that the relative impact of “human vs. AI” factors depends strongly on interaction context, providing guidance for designing more human-centered AI agents.