Character Beyond Speech: Leveraging Role-Playing Evaluation in Audio Large Language Models via Reinforcement Learning

arXiv cs.LG / 4/16/2026


Key Points

  • The paper proposes RoleJudge, an evaluation framework that uses audio large language models to assess how well a role-playing agent's responses align with its character's traits across both textual and vocal modalities.
  • It introduces RoleChat, a voice role-playing evaluation dataset that includes authentic and LLM-generated speech plus chain-of-thought reasoning annotations.
  • The authors apply a multi-stage training approach and use reinforcement learning with “Standard Alignment” to reduce reward misalignment during optimization of role-playing behavior.
  • Experiments report improved accuracy and better subjective assessments versus baseline models, supporting the value of multidimensional character evaluation for audio LLM role-play.
  • The work targets a key challenge in character alignment: vocal paralinguistic cues (e.g., timbre, emotion, prosody) are difficult to quantify, and traditional text-only evaluation does not capture them.

Abstract

The rapid evolution of multimodal large models has revolutionized the simulation of diverse characters in speech dialogue systems, enabling a novel interactive paradigm. Character attributes are manifested not only in textual responses but also through vocal features, as speech conveys rich paralinguistic information that is challenging to quantify. This poses significant difficulties in evaluating the character alignment of role-playing agents. To address these challenges, we present RoleJudge, an evaluation framework that leverages audio large language models to systematically assess the alignment between speech and character across multiple modalities and dimensions. Furthermore, we introduce RoleChat, the first voice role-playing evaluation dataset enriched with chain-of-thought reasoning annotations, comprising a diverse set of authentic and LLM-generated speech samples. Utilizing this dataset, we implement a multi-stage training paradigm and incorporate Standard Alignment in reinforcement learning to mitigate reward misalignment during optimization. Experimental results in terms of accuracy and subjective assessment demonstrate that RoleJudge outperforms various baseline models, validating the effectiveness of our multidimensional evaluation framework.