AI Navigate

Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment

arXiv cs.AI / 3/17/2026


Key Points

  • The paper investigates how people assign blame, causality, foreseeability, and counterfactual reasoning in AI-involved harms through human experiments.
  • It finds that higher AI agency (AI sets goals and means) increases perceived AI causal responsibility, while low AI agency shifts blame toward humans.
  • Reversing the roles of the human and the AI still leads participants to judge the human as more causal, indicating a robust human-centered attribution bias.
  • Developers are judged highly causal even when distant in the causal chain, reducing attributions to human users but not to AI.
  • Decomposing AI into a language model and an agentic component shows the agentic part is judged more causal, highlighting perceived autonomy as a key driver in liability assessments and informing AI harm liability frameworks.

Abstract

AI-related incidents are becoming increasingly frequent and severe, ranging from safety failures to misuse by malicious actors. In such complex situations, identifying which elements caused an adverse outcome, the problem of cause selection, is a critical first step for establishing liability. This paper investigates folk perceptions of causal responsibility in causal chain structures when AI systems are involved in harmful outcomes. We conduct human experiments to examine judgments of causality, blame, foreseeability, and counterfactual reasoning. Our findings show that: (1) When AI agency was moderate (the human sets the goal, the AI determines the means) or high (the AI sets both the goal and the means), participants attributed greater causal responsibility to the AI. However, under low AI agency (where the human sets both the goal and the means), participants assigned greater causal responsibility to the human despite the human's temporal distance from the outcome and despite both agents intending it, suggesting an effect of autonomy; (2) When we reversed the roles of human and AI, participants consistently judged the human as more causal, even when both agents performed the same action; (3) The developer, despite being distant in the chain, was judged highly causal, which reduced causal attributions to the human user but not to the AI; (4) Decomposing the AI into a large language model and an agentic component showed that the agentic part was judged as more causal in the chain. Overall, our research provides evidence on how people perceive the causal contribution of AI in both misuse and misalignment scenarios, and how these judgments interact with the roles of users and developers, key actors in assigning responsibility. These findings can inform the design of liability frameworks for AI-caused harms and shed light on how intuitive judgments shape social and policy debates surrounding real-world AI-related incidents.