Machine Behavior in Relational Moral Dilemmas: Moral Rightness, Predicted Human Behavior, and Model Decisions
arXiv cs.CL / 4/24/2026
📰 NewsIdeas & Deep AnalysisModels & Research
Key Points
- The paper studies whether large language models (LLMs) capture social-context effects in moral dilemmas, focusing on the Whistleblower's Dilemma by varying crime severity and relational closeness.
- It compares three viewpoints—moral rightness (prescriptive norms), predicted human behavior (descriptive expectations), and the model’s own autonomous decisions—to see how each responds to changes in relationship closeness.
- Results show a strong cross-perspective divergence: moral rightness judgments stay fairness-oriented, while predicted human behavior shifts toward loyalty as relationships become closer.
- The model’s decisions align with moral rightness rather than with its own predictions about human behavior, indicating reliance on rigid prescriptive rules instead of social nuance from its internal world model.
- The authors warn that this mismatch could create misalignment risks when such systems are deployed as decision-support tools in real-world social settings.
Related Articles

The 67th Attempt: When Your "Knowledge Management" System Becomes a Self-Fulfilling Prophecy of Excellence
Dev.to

Context Engineering for Developers: A Practical Guide (2026)
Dev.to

GPT-5.5 is here. So is DeepSeek V4. And honestly, I am tired of version numbers.
Dev.to

I Built an AI Image Workflow with GPT Image 2.0 (+ Fixing Its Biggest Flaw)
Dev.to
Max-and-Omnis/Nemotron-3-Super-64B-A12B-Math-REAP-GGUF
Reddit r/LocalLLaMA