Dual Perspectives in Emotion Attribution: A Generator-Interpreter Framework for Cross-Cultural Analysis of Emotion in LLMs

arXiv cs.CL / 4/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that emotion attribution in LLMs should account for both the cultural background of emotion expression (generator) and the cultural context of interpretation (interpreter), rather than assuming universality.
  • It introduces a Generator-Interpreter framework and evaluates six LLMs on emotion attribution using data spanning 15 countries.
  • Results show that LLM performance differences vary by emotion type and cultural context, indicating that cross-cultural emotion modeling is not uniform across settings.
  • The study finds generator–interpreter alignment effects, with the emotion generator’s country of origin having a stronger influence on performance than other factors.
  • The authors call for culturally sensitive emotion modeling to improve robustness and fairness in LLM-based emotion understanding systems deployed globally.

Abstract

Large language models (LLMs) are increasingly used in cross-cultural systems to understand and adapt to human emotions, which are shaped by cultural norms of expression and interpretation. However, prior work on emotion attribution has focused mainly on interpretation, overlooking the cultural background of emotion generators. This assumption of universality neglects variation in how emotions are expressed and perceived across nations. To address this gap, we propose a Generator-Interpreter framework that captures dual perspectives of emotion attribution by considering both expression and interpretation. We systematically evaluate six LLMs on an emotion attribution task using data from 15 countries. Our analysis reveals that performance variations depend on the emotion type and cultural context. We observe generator-interpreter alignment effects, with the generator's country of origin exerting the stronger influence on performance. We call for culturally sensitive emotion modeling in LLM-based systems to improve robustness and fairness in emotion understanding across diverse cultural contexts.
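To make the framework concrete, the evaluation described in the abstract can be sketched as a loop over (generator country, interpreter country) pairs, scoring emotion predictions per pair so that alignment effects (same vs. different country) can be compared. This is a minimal illustrative sketch, not the paper's actual pipeline: the country names, emotion labels, data fields, and the `attribute_emotion` stub standing in for an LLM call are all hypothetical.

```python
# Hypothetical sketch of a Generator-Interpreter evaluation loop.
# Country names, emotion labels, and the model stub are illustrative;
# the paper's actual prompts, data, and six LLMs are not reproduced here.
from itertools import product

GENERATOR_COUNTRIES = ["Japan", "Brazil", "Germany"]    # stand-ins for the 15 countries
INTERPRETER_COUNTRIES = ["Japan", "Brazil", "Germany"]

def attribute_emotion(event, generator_country, interpreter_country):
    """Stub standing in for an LLM call: given an event described by a
    person from `generator_country`, predict the emotion as judged from
    the perspective of `interpreter_country`."""
    # A real implementation would build a prompt along the lines of:
    #   "A person from {generator_country} describes: {event}.
    #    From the perspective of someone in {interpreter_country},
    #    which emotion is being expressed?"
    return "joy"  # placeholder prediction

def evaluate(dataset):
    """Score each (generator, interpreter) country pair separately so
    that generator-side vs. interpreter-side effects can be compared."""
    scores = {}
    for gen, interp in product(GENERATOR_COUNTRIES, INTERPRETER_COUNTRIES):
        items = [x for x in dataset if x["generator_country"] == gen]
        if not items:
            continue  # no data generated in this country
        correct = sum(
            attribute_emotion(x["event"], gen, interp) == x["gold_emotion"]
            for x in items
        )
        scores[(gen, interp)] = correct / len(items)
    return scores

# Tiny illustrative dataset (two events from one generator country).
dataset = [
    {"event": "won a local contest", "generator_country": "Japan", "gold_emotion": "joy"},
    {"event": "missed the last train", "generator_country": "Japan", "gold_emotion": "anger"},
]
scores = evaluate(dataset)
```

Keeping scores keyed by the full (generator, interpreter) pair, rather than averaging over one side, is what lets an analysis like the paper's ask whether the generator's country or the interpreter's country explains more of the variance.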