Imperfectly Cooperative Human-AI Interactions: Comparing the Impacts of Human and AI Attributes in Simulated and User Studies

arXiv cs.AI · April 20, 2026

💬 Opinion · Models & Research

Key Points

  • The study investigates how both human personality traits (extraversion, agreeableness) and AI design characteristics (adaptability, expertise, and chain-of-thought transparency) jointly affect human–AI interaction quality in partially misaligned (“imperfectly cooperative”) scenarios.
  • It compares two settings: 2,000 purely simulated runs and a parallel user study with 290 participants, covering hiring negotiations with AI hiring agents and transactions where AI agents may conceal information.
  • The authors extend beyond standard performance metrics by using scenario-based outcomes, communication analysis, and questionnaire measures within a causal discovery framework.
  • Results show notable divergences between simulation and human-subject data, and between the two scenario categories: AI attributes, especially transparency, had a much larger effect in the study with human participants than in the simulations.
  • The paper concludes that the relative impact of “human vs. AI” factors depends strongly on interaction context, providing guidance for designing more human-centered AI agents.

Abstract

AI design characteristics and human personality traits each impact the quality and outcomes of human-AI interactions. However, their relative and joint impacts are underexplored in imperfectly cooperative scenarios, where people and AI have only partially aligned goals and objectives. This study compares a purely simulated dataset comprising 2,000 simulations and a parallel human subjects experiment involving 290 human participants to investigate these effects across two scenario categories: (1) hiring negotiations between human job candidates and AI hiring agents; and (2) human-AI transactions wherein AI agents may conceal information to maximize internal goals. We examine user Extraversion and Agreeableness alongside AI design characteristics, including Adaptability, Expertise, and chain-of-thought Transparency. Our causal discovery analysis extends performance-focused evaluations by integrating scenario-based outcomes, communication analysis, and questionnaire measures. Results reveal divergences between the purely simulated and human study datasets, and between scenario types. In the simulation experiments, personality traits and AI attributes were comparably influential. Yet, with actual human subjects, AI attributes, particularly transparency, were much more impactful. We discuss how these divergences vary across different interaction contexts, offering crucial insights for the future of human-centered AI agents.
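The paper does not specify which causal discovery method it uses, but constraint-based approaches (such as the PC algorithm) are a common choice. As a rough illustration of the idea, the sketch below implements only the first, unconditional stage of PC-style skeleton construction: keep an undirected edge between two variables unless they appear marginally independent. The variable names, threshold, and toy data here are illustrative assumptions, not the paper's actual variables, measurement scales, or analysis.

```python
import numpy as np

def skeleton_edges(data, names, threshold=0.2):
    """Unconditional stage of PC-style skeleton construction:
    retain an undirected edge between two variables unless their
    absolute sample correlation falls below `threshold`.
    Full constraint-based discovery would continue with conditional
    independence tests and edge orientation; this sketch omits both."""
    corr = np.corrcoef(data, rowvar=False)
    edges = set()
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(corr[i, j]) >= threshold:
                edges.add((names[i], names[j]))
    return edges

# Hypothetical toy data: AI transparency drives outcome quality,
# while user extraversion is generated independently of both.
rng = np.random.default_rng(0)
n = 500
transparency = rng.normal(size=n)
outcome = 0.8 * transparency + rng.normal(scale=0.5, size=n)
extraversion = rng.normal(size=n)
data = np.column_stack([transparency, outcome, extraversion])

edges = skeleton_edges(data, ["transparency", "outcome", "extraversion"])
print(edges)  # the transparency-outcome edge survives; extraversion is isolated
```

In this toy setup, only the transparency-outcome edge is retained, mirroring (in spirit, not in substance) the kind of structure a causal discovery analysis would surface from scenario outcomes, communication features, and questionnaire measures.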