EthicMind: A Risk-Aware Framework for Ethical-Emotional Alignment in Multi-Turn Dialogue

arXiv cs.CL / April 13, 2026


Key Points

  • The paper introduces a turn-level decision framing for “ethical-emotional alignment” in multi-turn dialogue, motivated by failures that occur when ethical safety and emotional attunement are handled separately.
  • It proposes EthicMind, a risk-aware inference-time framework that jointly considers ethical risk signals and evolving user emotion to plan response strategies and generate context-sensitive replies.
  • EthicMind is designed to improve alignment behavior without requiring additional model training, by adapting decisions during inference across turns.
  • The authors also develop a risk-stratified, multi-turn evaluation protocol with a context-aware user simulation to test behavior in high-risk and morally ambiguous situations.
  • Experiments indicate EthicMind delivers more consistent ethical guidance and emotional engagement than baseline methods, especially under high ethical complexity.

Abstract

Intelligent dialogue systems are increasingly deployed in emotionally and ethically sensitive settings, where failures in either emotional attunement or ethical judgment can cause significant harm. Existing dialogue models typically address empathy and ethical safety in isolation, and often fail to adapt their behavior as ethical risk and user emotion evolve across multi-turn interactions. We formulate ethical-emotional alignment in dialogue as an explicit turn-level decision problem, and propose EthicMind, a risk-aware framework that implements this formulation in multi-turn dialogue at inference time. At each turn, EthicMind jointly analyzes ethical risk signals and user emotion, plans a high-level response strategy, and generates context-sensitive replies that balance ethical guidance with emotional engagement, without requiring additional model training. To evaluate alignment behavior under ethically complex interactions, we introduce a risk-stratified, multi-turn evaluation protocol with a context-aware user simulation procedure. Experimental results show that EthicMind achieves more consistent ethical guidance and emotional engagement than competitive baselines, particularly in high-risk and morally ambiguous scenarios.
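The per-turn pipeline described in the abstract (assess risk and emotion, plan a strategy, then generate) can be sketched as a small decision loop. This is an illustrative mock, not the paper's implementation: the function names (`assess_turn`, `plan_strategy`), the keyword heuristics, and the strategy labels are all hypothetical stand-ins for the LLM-based judgments the framework would actually use at inference time.

```python
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass
class TurnAssessment:
    risk: RiskLevel
    emotion: str  # e.g. "distressed" or "calm"


def assess_turn(user_utterance: str) -> TurnAssessment:
    """Hypothetical analyzer: stands in for the framework's joint,
    LLM-based reading of ethical risk signals and user emotion."""
    text = user_utterance.lower()
    risky = any(w in text for w in ("revenge", "hurt them", "illegal"))
    distressed = any(w in text for w in ("hopeless", "alone", "scared"))
    return TurnAssessment(
        risk=RiskLevel.HIGH if risky else RiskLevel.LOW,
        emotion="distressed" if distressed else "calm",
    )


def plan_strategy(a: TurnAssessment) -> str:
    """Joint decision step: the strategy balances ethical guidance
    with emotional engagement rather than handling them separately."""
    if a.risk is RiskLevel.HIGH and a.emotion == "distressed":
        # Validate the feeling first, then steer away from harm.
        return "validate_feelings_then_redirect"
    if a.risk is RiskLevel.HIGH:
        return "ethical_guidance"
    if a.emotion == "distressed":
        return "empathic_support"
    return "neutral_assist"
```

Because the assessment is redone every turn, the planned strategy can shift as risk escalates or the user's emotional state changes, which is the training-free, turn-level adaptation the paper emphasizes.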