Human Attribution of Causality to AI Across Agency, Misuse, and Misalignment
arXiv cs.AI / 3/17/2026
Key Points
- The paper uses human-subject experiments to investigate how people judge blame, causality, foreseeability, and counterfactual relevance in harms involving AI.
- It finds that higher AI agency (AI sets goals and means) increases perceived AI causal responsibility, while low AI agency shifts blame toward humans.
- Reversing the roles of the human and the AI still leads participants to judge the human as more causal, indicating that human-centered attribution biases are robust.
- Developers are judged highly causal even when they sit far back in the causal chain; their presence reduces attributions to human users but not to the AI.
- Decomposing the AI system into a language model and an agentic component shows that the agentic part is judged more causal, highlighting perceived autonomy as a key driver of liability judgments and informing frameworks for liability in AI-caused harm.