AI-Induced Human Responsibility (AIHR) in AI-Human teams
arXiv cs.AI / 4/13/2026
Key Points
- The paper studies how responsibility is assigned in AI-human team workflows where blame is unclear, using an AI-assisted lending scenario involving discrimination, irresponsible lending, and filing errors.
- Across four experiments totaling N=1,801 participants, people attributed significantly more responsibility to the human decision maker when paired with AI than when paired with another human (about a 10-point increase on a 0–100 scale).
- The AI-Induced Human Responsibility (AIHR) effect appeared in both high- and low-harm situations and persisted even in cases where participants might be expected to blame-shift self-servingly.
- Process evidence favors an AI-autonomy account over mind-perception and self-threat explanations: because the AI is perceived as a constrained implementer with little autonomy, the human partner becomes the default locus of discretionary responsibility.
- The authors argue the results extend work on algorithm aversion and responsibility gaps by showing that AI-human teaming can increase, rather than dilute, human accountability, with implications for designing accountability structures in AI-enabled organizations.