MUSE: Multi-Domain Chinese User Simulation via Self-Evolving Profiles and Rubric-Guided Alignment

arXiv cs.CL / 4/16/2026


Key Points

  • The paper introduces MUSE, a multi-domain Chinese user simulation framework aimed at producing human-like, controllable, and persona-consistent responses across long interactions.
  • It proposes Iterative Profile Self-Evolution (IPSE) to iteratively refine simulated user profiles by reasoning over discrepancies between simulated and real dialogue trajectories.
  • It improves response realism via Role-Reversal Supervised Fine-Tuning and enhances long-horizon alignment using a rubric-based reward model coupled with rubric-guided multi-turn reinforcement learning.
  • Experiments reportedly show MUSE outperforms strong baselines on both utterance-level and session-level evaluations, with better realism, coherence, and persona consistency over extended dialogues.
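
The IPSE loop described in the second point can be sketched in miniature. This is a toy illustration only, assuming a profile is a dict of trait-value pairs and a trajectory is a list of behavior tags; the function names `simulate`, `diff_trajectories`, and `revise_profile` are hypothetical stand-ins for the paper's LLM-based reasoning steps.

```python
# Toy sketch of Iterative Profile Self-Evolution (IPSE): simulate with the
# current profile, find behaviors the simulation misses relative to the real
# trajectory, and fold them back into the profile until convergence.

def simulate(profile):
    """Stand-in simulator: emits the behavior tags implied by the profile."""
    return [f"{trait}:{value}" for trait, value in sorted(profile.items())]

def diff_trajectories(simulated, real):
    """Behaviors present in the real trajectory but absent from simulation."""
    return [tag for tag in real if tag not in simulated]

def revise_profile(profile, discrepancies):
    """Fold each missing behavior (a trait:value tag) back into the profile."""
    revised = dict(profile)
    for tag in discrepancies:
        trait, _, value = tag.partition(":")
        revised[trait] = value
    return revised

def ipse(profile, real_trajectory, max_rounds=5):
    """Iterate until the simulated trajectory covers the real behaviors."""
    for _ in range(max_rounds):
        discrepancies = diff_trajectories(simulate(profile), real_trajectory)
        if not discrepancies:
            break
        profile = revise_profile(profile, discrepancies)
    return profile

seed = {"tone": "formal"}
real = ["tone:casual", "patience:low"]
print(ipse(seed, real))  # profile evolves to cover the observed behaviors
```

In the actual framework the comparison and revision steps are performed by reasoning over full dialogue trajectories rather than tag sets, but the control flow is the same.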

Abstract

User simulators are essential for the scalable training and evaluation of interactive AI systems. However, existing approaches often rely on shallow user profiling, struggle to maintain persona consistency over long interactions, and are largely limited to English or single-domain settings. We present MUSE, a multi-domain Chinese user simulation framework designed to generate human-like, controllable, and behaviorally consistent responses. First, we propose Iterative Profile Self-Evolution (IPSE), which gradually optimizes user profiles by comparing simulated trajectories against real dialogue behaviors and reasoning over the discrepancies. We then apply Role-Reversal Supervised Fine-Tuning to improve local response realism and human-like expression. To enable fine-grained behavioral alignment, we further train a specialized rubric-based reward model and incorporate it into rubric-guided multi-turn reinforcement learning, which optimizes the simulator at the dialogue level and enhances long-horizon behavioral consistency. Experiments show that MUSE consistently outperforms strong baselines in both utterance-level and session-level evaluations, generating responses that are more realistic, coherent, and persona-consistent over extended interactions.
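
The rubric-based reward model mentioned in the abstract can be pictured as a weighted aggregation of per-criterion scores. The sketch below is an assumption about the general shape of such a reward, not the paper's implementation: the rubric names, weights, and score values are illustrative, and in MUSE the per-criterion scores would come from the trained reward model rather than being supplied by hand.

```python
# Illustrative rubric-based reward: each rubric item is a named check scored
# in [0, 1], the turn-level reward is their weighted mean, and the session-
# level reward (used for dialogue-level RL) averages over turns.

RUBRIC = {
    "persona_consistency": 0.4,  # does the turn match the user profile?
    "realism": 0.3,              # does it read like a human user?
    "coherence": 0.3,            # does it follow the dialogue history?
}

def rubric_reward(scores, rubric=RUBRIC):
    """Weighted mean of per-rubric scores; `scores` maps name -> [0, 1]."""
    total_weight = sum(rubric.values())
    return sum(rubric[name] * scores[name] for name in rubric) / total_weight

def session_reward(turn_scores):
    """Average the per-turn rubric rewards over a whole dialogue session."""
    return sum(rubric_reward(s) for s in turn_scores) / len(turn_scores)

turns = [
    {"persona_consistency": 1.0, "realism": 0.8, "coherence": 0.9},
    {"persona_consistency": 0.5, "realism": 0.9, "coherence": 1.0},
]
print(round(session_reward(turns), 3))  # → 0.84
```

Scoring at the session level rather than per utterance is what lets the reward signal penalize long-horizon drift, such as a persona trait contradicted many turns later.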