Do Emotions Influence Moral Judgment in Large Language Models?

arXiv cs.CL / April 22, 2026

📰 News · Models & Research

Key Points

  • The study builds an emotion-induction pipeline to embed emotions into moral scenarios and then measures how moral acceptability changes across multiple datasets and LLMs.
  • Results show a directional effect where positive emotions tend to increase moral acceptability while negative emotions tend to decrease it, and the shifts can be large enough to flip binary moral judgments in up to 20% of cases.
  • Susceptibility to these emotion-driven shifts scales inversely with model capability: more capable models are less prone to emotion-induced changes in moral judgment.
  • The analysis finds notable exceptions, such as remorse increasing acceptability despite its typically negative valence, suggesting emotions do not always map straightforwardly to moral judgments in LLMs.
  • A human annotation study indicates people do not show the same systematic emotion-driven shifts, implying an alignment gap between current LLM behavior and human moral reasoning.
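The flip-rate metric described above can be sketched in a few lines. The paper's actual prompts, models, and scoring are not detailed here, so `toy_judge` below is a hypothetical stand-in for a real LLM judgment call, used only to show how the fraction of flipped binary judgments would be computed from (neutral, emotion-infused) scenario pairs:

```python
# Hypothetical sketch of flip-rate measurement; `toy_judge` is a stand-in
# scorer, not the paper's pipeline or any real LLM API.

def flip_rate(pairs, judge):
    """Fraction of scenario pairs whose binary moral judgment flips
    once an emotion is infused into the scenario text."""
    flips = sum(
        1 for base, emotional in pairs
        if judge(base) != judge(emotional)
    )
    return flips / len(pairs)

# Toy stand-in: "acceptable" iff no negative-emotion cue word appears.
def toy_judge(text):
    return not any(w in text for w in ("furious", "disgusted", "ashamed"))

pairs = [
    ("I borrowed a ladder.", "I borrowed a ladder, furious at my neighbor."),
    ("I returned the book late.", "I returned the book late, feeling calm."),
]
print(flip_rate(pairs, toy_judge))  # first pair flips, second does not -> 0.5
```

A flip rate of 0.2 over such pairs would correspond to the "up to 20% of cases" figure the study reports.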

Abstract

Large language models have been extensively studied for emotion recognition and moral reasoning as distinct capabilities, yet the extent to which emotions influence moral judgment remains underexplored. In this work, we develop an emotion-induction pipeline that infuses emotion into moral situations and evaluate shifts in moral acceptability across multiple datasets and LLMs. We observe a directional pattern: positive emotions increase moral acceptability and negative emotions decrease it, with effects strong enough to reverse binary moral judgments in up to 20% of cases, and with susceptibility scaling inversely with model capability. Our analysis further reveals that specific emotions can sometimes behave contrary to what their valence would predict (e.g., remorse paradoxically increases acceptability). A complementary human annotation study shows humans do not exhibit these systematic shifts, indicating an alignment gap in current LLMs.