Can AI be a moral victim? The role of moral patiency and ownership perceptions in ethical judgments of using AI-generated content
arXiv cs.AI / 5/1/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The study investigates how moral patiency (perceived ability to suffer) and ownership perceptions affect ethical judgments about reusing AI-generated content.
- In participants’ evaluations of highly similar manuscripts, copying AI-generated work was seen as less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work.
- Mediation analyses indicate that this leniency toward AI reuse stems from two perceptions: AI is seen as less capable of suffering harm, and ownership of the reused content is attributed more strongly to the human writer.
- Anthropomorphic cues about AI (e.g., a human-like name) indirectly shift moral evaluations by lowering perceived ownership, showing that framing alone can change ethical responses.