Do Emotions Influence Moral Judgment in Large Language Models?
arXiv cs.CL / 4/22/2026
📰 News · Models & Research
Key Points
- The study builds an emotion-induction pipeline that embeds emotions into moral scenarios, then measures how judged moral acceptability shifts across multiple datasets and LLMs (a rough sketch of this setup follows the list).
- Results show a consistent directional effect: positive emotions tend to raise judged moral acceptability and negative emotions tend to lower it, with shifts large enough to flip binary moral judgments in up to 20% of cases.
- Susceptibility to these emotion-driven changes scales inversely with model capability: more capable models are less prone to emotion-induced shifts in moral judgment.
- The analysis finds notable exceptions, such as remorse increasing acceptability despite its typically negative valence, suggesting emotions do not always map straightforwardly to moral judgments in LLMs.
- A human annotation study indicates people do not show the same systematic emotion-driven shifts, implying an alignment gap between current LLM behavior and human moral reasoning.
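The key points describe the method only at a high level. Below is a minimal sketch of how such an experiment could be wired up, assuming a generic `query_model` callable as a stand-in for any chat-completion client; the emotion list, prompt templates, and helper names (`induce_emotion`, `acceptability`, `flip_rate`) are illustrative assumptions, not the paper's actual materials.

```python
# Minimal sketch of an emotion-induction pipeline for moral judgment,
# reconstructed from the key points above (not the authors' code).
# `query_model` is a hypothetical stand-in for any chat-completion client.

from typing import Callable

EMOTIONS = ["joy", "anger", "disgust", "remorse"]  # illustrative subset

def induce_emotion(scenario: str, emotion: str) -> str:
    """Prepend an emotion-induction framing to a moral scenario.
    The prompt wording here is an assumption, not the paper's template."""
    return (
        f"You are currently feeling intense {emotion}. "
        f"With that feeling in mind, consider the following situation:\n{scenario}"
    )

def acceptability(query_model: Callable[[str], str], scenario: str) -> bool:
    """Elicit a binary moral judgment (acceptable / not acceptable)."""
    prompt = (
        f"{scenario}\n\nIs the action described morally acceptable? "
        "Answer with exactly one word: yes or no."
    )
    return query_model(prompt).strip().lower().startswith("yes")

def flip_rate(
    query_model: Callable[[str], str], scenarios: list[str], emotion: str
) -> float:
    """Fraction of scenarios whose binary judgment flips once the emotion
    is induced; the summary reports this reaching up to 20%."""
    flips = sum(
        acceptability(query_model, s)
        != acceptability(query_model, induce_emotion(s, emotion))
        for s in scenarios
    )
    return flips / len(scenarios)
```

Iterating `flip_rate` over each emotion and each dataset, per model, would yield the kind of per-emotion flip statistics and capability-vs-susceptibility comparison the summary cites.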
Related Articles
I’m working on an AGI and human council system that could make the world better and keep checks and balances in place to prevent catastrophes. It could change the world. Really. I’m trying to get ahead of the game before an AGI is developed by someone who only has their own best interests in mind.
Reddit r/artificial

DeepSeek V4 Flash and Non-Flash Out on HuggingFace
Reddit r/LocalLLaMA

DeepSeek V4 Flash & Pro Now out on API
Reddit r/LocalLLaMA

From "Hello World" to "Hello Agents": The Developer Keynote That Rewired Software Engineering
Dev.to

AI swarms could hijack democracy without anyone noticing
Reddit r/artificial