On the Emotion Understanding of Synthesized Speech

arXiv cs.CL / 3/18/2026

Key Points

  • The study systematically evaluates Speech Emotion Recognition (SER) on synthesized speech across multiple datasets, discriminative and generative SER models, and diverse synthesis models, testing whether emotion understanding learned on human speech transfers to synthesized speech (a minimal sketch of this evaluation loop follows the list).
  • The authors find that current SER models do not generalize to synthesized speech, due to a representation mismatch introduced by speech token prediction during synthesis.
  • Generative Speech Language Models tend to infer emotion from textual semantics rather than relying on paralinguistic cues.
  • The results indicate that existing SER models often exploit non-robust shortcuts and that robust paralinguistic understanding in SLMs remains challenging, with implications for using SER as a metric in speech synthesis.
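
To make the first point concrete, here is a minimal, hypothetical sketch of such an evaluation loop in Python. The helper functions are placeholders standing in for whichever synthesis and SER systems are under test; this illustrates the setup, not the paper's actual code.

```python
# Hypothetical sketch of the evaluation setup: synthesize emotional speech
# from text, run a pretrained SER model on it, and compare accuracy against
# the same SER model on the original human recordings.
# `synthesize` and `predict_emotion` are placeholders, not the paper's code.
from dataclasses import dataclass


@dataclass
class Utterance:
    text: str          # transcript of the utterance
    emotion: str       # ground-truth emotion label, e.g. "angry", "happy"
    human_wav: bytes   # original human recording


def synthesize(text: str, emotion: str) -> bytes:
    """Placeholder for an emotion-conditioned synthesis model (TTS or speech LM)."""
    raise NotImplementedError


def predict_emotion(wav: bytes) -> str:
    """Placeholder for a pretrained SER model, discriminative or generative."""
    raise NotImplementedError


def ser_accuracy(dataset: list[Utterance]) -> tuple[float, float]:
    """Return SER accuracy on human speech and on matched synthesized speech."""
    human_correct = synth_correct = 0
    for utt in dataset:
        if predict_emotion(utt.human_wav) == utt.emotion:
            human_correct += 1
        synth_wav = synthesize(utt.text, utt.emotion)
        if predict_emotion(synth_wav) == utt.emotion:
            synth_correct += 1
    n = len(dataset)
    return human_correct / n, synth_correct / n
```

A large gap between the two accuracies, under this reading, is what the authors interpret as SER failing to generalize to synthesized speech.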

Abstract

Emotion is a core paralinguistic feature in voice interaction. It is widely believed that emotion understanding models learn fundamental representations that transfer to synthesized speech, making emotion understanding results a plausible reward or evaluation metric for assessing emotional expressiveness in speech synthesis. In this work, we critically examine this assumption by systematically evaluating Speech Emotion Recognition (SER) on synthesized speech across datasets, discriminative and generative SER models, and diverse synthesis models. We find that current SER models cannot generalize to synthesized speech, largely because speech token prediction during synthesis induces a representation mismatch between synthesized and human speech. Moreover, generative Speech Language Models (SLMs) tend to infer emotion from textual semantics while ignoring paralinguistic cues. Overall, our findings suggest that existing SER models often exploit non-robust shortcuts rather than capturing fundamental features, and paralinguistic understanding in SLMs remains challenging.
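
As a rough illustration of the representation-mismatch claim (a hypothetical probe, not the analysis performed in the paper), one could embed matched human and synthesized utterances with the same SER encoder and compare the resulting embeddings, for example via cosine similarity:

```python
# Hypothetical probe of the representation mismatch between human and
# synthesized speech: embed matched utterances with the same SER encoder
# and measure how similar they are. `embed` is a placeholder for any
# pretrained speech encoder; this is not the paper's actual analysis.
import numpy as np


def embed(wav: np.ndarray) -> np.ndarray:
    """Placeholder: return a fixed-size embedding from an SER encoder."""
    raise NotImplementedError


def mean_cosine_similarity(human_wavs: list[np.ndarray],
                           synth_wavs: list[np.ndarray]) -> float:
    """Average cosine similarity between matched human/synthesized embeddings."""
    sims = []
    for h, s in zip(human_wavs, synth_wavs):
        eh, es = embed(h), embed(s)
        sims.append(float(eh @ es / (np.linalg.norm(eh) * np.linalg.norm(es))))
    return float(np.mean(sims))
```

Under this reading, markedly lower similarity between human and synthesized versions of the same utterance than between two human recordings would point to the kind of mismatch the authors describe.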