EmoLLM: Appraisal-Grounded Cognitive-Emotional Co-Reasoning in Large Language Models

arXiv cs.CL / 3/18/2026

Key Points

  • EmoLLM introduces an appraisal-grounded framework for joint cognitive (IQ) and emotional (EQ) co-reasoning in dialogue, improving both reliability and emotional appropriateness.
  • It uses an explicit Appraisal Reasoning Graph (ARG) to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies before generating a reply.
  • The model is trained in a multi-turn role-play environment with reinforcement learning, using reverse-perspective reasoning to provide reward signals based on predicted user-side consequences.
  • Across diverse dialogue settings, EmoLLM improves emotional state outcomes and response quality while maintaining strong factual reliability compared with strong baselines.
  • This approach targets real-world interactions such as emotional support, technical assistance, and consultation, where emotional intelligence is crucial.
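To make the ARG idea above concrete, here is a minimal sketch of how such a structured intermediate-reasoning scratchpad might be represented before reply generation. The field names and serialization format are illustrative assumptions, not the paper's actual schema.

```python
from dataclasses import dataclass

# Hypothetical layout of an Appraisal Reasoning Graph (ARG) node set,
# mirroring the components named in the paper: contextual facts, inferred
# user needs, appraisal dimensions, emotional state, response strategy.
@dataclass
class AppraisalReasoningGraph:
    contextual_facts: list[str]   # facts extracted from the dialogue so far
    user_needs: list[str]         # inferred goals and needs of the user
    appraisals: dict[str, str]    # appraisal dimension -> assessment
    emotional_state: str          # estimated user emotion
    response_strategy: str        # strategy chosen before drafting the reply

    def to_prompt(self) -> str:
        """Serialize the graph into a structured scratchpad the model
        would condition on before generating the final reply."""
        lines = ["[ARG]"]
        lines += [f"fact: {f}" for f in self.contextual_facts]
        lines += [f"need: {n}" for n in self.user_needs]
        lines += [f"appraisal/{k}: {v}" for k, v in self.appraisals.items()]
        lines.append(f"emotion: {self.emotional_state}")
        lines.append(f"strategy: {self.response_strategy}")
        return "\n".join(lines)

arg = AppraisalReasoningGraph(
    contextual_facts=["user's laptop crashed right before a deadline"],
    user_needs=["recover the file", "reassurance"],
    appraisals={"goal_relevance": "high", "coping_capacity": "low"},
    emotional_state="anxious",
    response_strategy="validate feelings, then give concrete recovery steps",
)
print(arg.to_prompt())
```

The point of an explicit structure like this is that the appraisal step (goal relevance, coping capacity) is computed before strategy selection, so the reply can be both factually useful and emotionally calibrated.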

Abstract

Large language models (LLMs) demonstrate strong cognitive intelligence (IQ), yet many real-world interactions also require emotional intelligence (EQ) to produce responses that are both factually reliable and emotionally appropriate. In settings such as emotional support, technical assistance, and consultation, effective dialogue depends on how situations are appraised with respect to the user's needs, goals, and coping capacity. Inspired by appraisal theory, we propose EmoLLM, an appraisal-grounded framework for IQ/EQ co-reasoning in dialogue. EmoLLM uses an explicit Appraisal Reasoning Graph (ARG) to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies before generating a reply. We train EmoLLM in a multi-turn role-play environment with reinforcement learning, where reverse-perspective reasoning provides reward signals based on predicted user-side consequences of responses. Across diverse dialogue settings, EmoLLM improves emotional state outcomes and response quality over strong baselines while preserving strong factual reliability.
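The reverse-perspective reward described above can be sketched as follows: score a candidate assistant reply by simulating the user's likely next turn and estimating whether the user's emotional state improved. All function names, the cue-word valence estimator, and the reward form are hypothetical simplifications for illustration, not the paper's actual reward model.

```python
# Toy valence estimator for this sketch: count positive vs. negative cue
# words in a turn. A real system would use a learned emotion classifier.
POSITIVE = {"thanks", "relieved", "great", "helpful"}
NEGATIVE = {"worried", "frustrated", "anxious", "upset"}

def estimate_valence(turn: str) -> int:
    words = turn.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def reverse_perspective_reward(dialogue, candidate_reply, simulate_user):
    """Reward a candidate reply by its predicted user-side consequence:
    roll the dialogue forward with a simulated user turn and measure the
    change in estimated emotional valence."""
    predicted_user_turn = simulate_user(dialogue + [candidate_reply])
    before = estimate_valence(dialogue[-1])          # user's current state
    after = estimate_valence(predicted_user_turn)    # predicted next state
    return after - before  # positive if the reply is predicted to help

# Usage: a stand-in user simulator that always responds gratefully.
dialogue = ["hi, can you help me", "i am worried and anxious about my files"]
simulate_user = lambda history: "thanks that is helpful"
reward = reverse_perspective_reward(dialogue, "try restoring from backup",
                                    simulate_user)
print(reward)  # valence rises from -2 to +2, so the reward is 4
```

In the paper's multi-turn role-play setup, this kind of predicted-consequence signal is what the reinforcement-learning loop optimizes, rather than a direct score on the reply text alone.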