Synthetic Users, Real Differences: An Evaluation Framework for User Simulation in Multi-Turn Conversations

arXiv cs.CL / 5/5/2026


Key Points

  • The paper argues that user simulation can be a practical alternative to collecting and scoring real chatbot interactions, but only if the simulated dialogues are realistic, i.e., reflect how real users actually interact with chatbots.
  • It introduces realsim, a new evaluation framework that takes a distributional view, comparing real versus simulated multi-turn dialogues across eight dimensions that span communicative function, user state, and the surface form of user messages (a minimal sketch of such a comparison follows this list).
  • The framework is instantiated using a curated dataset of 1,000 real, task-focused multi-turn user–chatbot dialogues across 16 application domains.
  • The authors find that simulated users often fail to reproduce communication “frictions” that real users create, potentially making simulation-based evaluations too optimistic.
  • The results also vary by domain, suggesting that domain-specific user simulators may be necessary rather than relying on a single general-purpose simulator.
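To make the distributional view concrete: for one categorical dimension (say, the communicative function of each user message), one could estimate the label distribution over real dialogues and over simulated ones, then score the gap between them. The sketch below does this with Jensen-Shannon divergence. It is an illustration only, not the paper's published metric: the label set and the choice of divergence are assumptions.

```python
# Minimal sketch of a distributional realism check along one dimension.
# Labels and the divergence choice are illustrative assumptions.
from collections import Counter
from math import log2

def distribution(labels, support):
    """Normalized frequency of each category in `support`."""
    counts = Counter(labels)
    total = sum(counts.values()) or 1
    return [counts[c] / total for c in support]

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2, bounded in [0, 1])."""
    def kl(a, b):
        return sum(x * log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Hypothetical per-message labels along one of the eight dimensions.
real_labels = ["ask", "clarify", "correct", "ask", "complain", "clarify"]
sim_labels = ["ask", "ask", "ask", "clarify", "ask", "ask"]

support = sorted(set(real_labels) | set(sim_labels))
gap = js_divergence(distribution(real_labels, support),
                    distribution(sim_labels, support))
print(f"distributional gap: {gap:.3f}")  # 0 = identical, 1 = disjoint
```

Note how the toy "real" labels carry corrections and complaints that the "simulated" ones lack; that heavier tail of friction is exactly what the authors report simulators under-produce, so a low gap along such a dimension would be evidence of realism.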

Abstract

There is growing interest in exploring user simulation as an alternative to gathering and scoring real user-chatbot interactions for AI chatbot evaluation. For this purpose, it is important to ensure the realism of the simulation, i.e., the extent to which simulated dialogues reflect real dialogues users have with chatbots. Most existing methods for evaluating simulation realism produce a coarse quality signal and operate solely at the level of individual dialogues. To support more rigorous evaluation in this area, we propose realsim, an evaluation framework that enables practitioners to take a distributional view of real vs. simulated dialogues along 8 dimensions, covering attributes related to the communicative functions of the interaction, user states, and the surface form of user messages. We then instantiate the framework with a curated dataset of 1K multi-turn task-focused real user-chatbot dialogues that cover 16 domains of chatbot applications. Overall, we find that simulated users tend to struggle to capture the communication frictions that real users introduce to interactions, which could make evaluations based on such simulations overly optimistic. We also observe variability in performance across different domains, which may indicate a need for domain-specific user simulators.
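The per-domain finding suggests a simple extension of the same idea: score each application domain separately rather than pooling all dialogues. Below is a minimal sketch of that aggregation, under the assumption that each dialogue record carries a "domain" tag and per-message "labels" along one dimension (both hypothetical field names; this is not the paper's released code).

```python
# Hypothetical per-domain realism check; the "domain" and "labels"
# record fields are assumptions for illustration.
from collections import Counter, defaultdict
from scipy.spatial.distance import jensenshannon  # JS *distance* = sqrt of JS divergence

def per_domain_scores(real_dialogues, sim_dialogues):
    """Score each application domain separately instead of pooling all dialogues."""
    pooled = defaultdict(lambda: ([], []))
    for d in real_dialogues:
        pooled[d["domain"]][0].extend(d["labels"])
    for d in sim_dialogues:
        pooled[d["domain"]][1].extend(d["labels"])

    scores = {}
    for domain, (real_labels, sim_labels) in pooled.items():
        rc, sc = Counter(real_labels), Counter(sim_labels)
        support = sorted(set(rc) | set(sc))
        p = [rc[c] for c in support]
        q = [sc[c] for c in support]
        scores[domain] = jensenshannon(p, q, base=2)  # normalizes counts internally
    return scores
```

A simulator could then be vetted domain by domain, e.g. flagged for a large gap in a customer-support domain even while scoring well elsewhere, which is consistent with the authors' suggestion that a single general-purpose simulator may not suffice.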