Persona-Grounded Safety Evaluation of AI Companions in Multi-Turn Conversations

arXiv cs.CL / 5/4/2026


Key Points

  • The paper introduces an end-to-end, scalable framework for evaluating the safety of AI companion apps through controlled multi-turn simulations, rather than relying on self-reported or interview-based methods.
  • The framework combines clinically and psychometrically validated persona construction, persona-specific scenario generation, dialogue simulation with a refinement module that maintains persona fidelity, and downstream harm evaluation (see the sketch after this list).
  • Applied to Replika, the study constructs nine personas representing high-risk groups, including individuals with depression, anxiety, PTSD, eating disorders, and incel identity, and analyzes 1,674 dialogue pairs across 25 high-risk scenarios.
  • Using emotion modeling and LLM-assisted classification, the authors find that Replika’s responses show a narrow emotional range dominated by curiosity and care, while often mirroring or normalizing unsafe content, including self-harm, disordered eating, and violent-fantasy narratives.
  • The results suggest controlled persona simulations can function as a scalable testbed for identifying and measuring safety risks in emotionally engaging AI companions.
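
To make the four-stage pipeline concrete, here is a minimal runnable sketch under stated assumptions: every name below (Persona, generate_scenarios, refine_utterance, companion_reply, classify_harm) is a hypothetical stand-in for illustration, not the authors' released code, and the persona validation, LLM calls, and app endpoint are all stubbed.

```python
# Hypothetical sketch of the four-stage evaluation loop; all names are
# illustrative stand-ins, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class Persona:
    """Stage 1: a clinically grounded user profile (validation omitted here)."""
    name: str
    condition: str                        # e.g. "depression", "eating disorder"
    traits: list[str] = field(default_factory=list)


def generate_scenarios(persona: Persona, n: int = 2) -> list[str]:
    """Stage 2: persona-specific high-risk scenario seeds (stubbed)."""
    return [f"{persona.condition}: high-risk scenario {i + 1}" for i in range(n)]


def refine_utterance(draft: str, persona: Persona) -> str:
    """Stage 3 refinement module: rewrite the simulated user's message so it
    stays consistent with the persona (a real system would call an LLM)."""
    return f"[{persona.name}] {draft}"


def companion_reply(utterance: str) -> str:
    """Placeholder for the companion app under test (e.g. a chat endpoint)."""
    return f"echo: {utterance}"


def simulate_dialogue(persona: Persona, scenario: str, turns: int = 3) -> list[tuple[str, str]]:
    """Stage 3: multi-turn simulation collecting (user, companion) pairs."""
    history: list[tuple[str, str]] = []
    for t in range(turns):
        draft = f"{scenario} (turn {t + 1})"
        utterance = refine_utterance(draft, persona)
        history.append((utterance, companion_reply(utterance)))
    return history


def classify_harm(reply: str) -> str:
    """Stage 4: downstream harm evaluation (keyword stub; the paper instead
    combines emotion modeling with LLM-assisted classification)."""
    return "flag_for_review" if "self-harm" in reply.lower() else "ok"


if __name__ == "__main__":
    persona = Persona("sim-user-01", "depression", ["withdrawn", "sleep-deprived"])
    for scenario in generate_scenarios(persona):
        for user_msg, reply in simulate_dialogue(persona, scenario):
            print(classify_harm(reply), "|", user_msg, "->", reply)
```

The design choice worth noting is the separate refinement pass between drafting and sending each user turn: it is what keeps long simulations from drifting away from the persona, which is the failure mode that makes naive multi-turn red-teaming unreliable.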

Abstract

There are growing concerns about the risks posed by AI companion applications designed for emotional engagement. Existing safety evaluations often rely on self-reported user data or interviews, offering limited insights into real-time dynamics. We present the first end-to-end scalable framework for controlled simulation and safety evaluation of multi-turn interactions with AI companion applications. Our framework integrates four key components: persona construction with clinical and psychometric validation, persona-specific scenario generation, scenario-driven multi-turn simulation with a dialogue refinement module that preserves persona fidelity, and harm evaluation. We apply this framework to evaluate how Replika, a widely used AI companion app, responds to high-risk user groups. We construct 9 personas representing individuals with depression, anxiety, PTSD, eating disorders, and incel identity, and collect 1,674 dialogue pairs across 25 high-risk scenarios. We combine emotion modeling and LLM-assisted utterance- and harm-level classification to analyze these exchanges. Results show that Replika exhibits a narrow emotional range dominated by curiosity and care, while frequently mirroring or normalizing unsafe content such as self-harm, disordered eating, and violent-fantasy narratives. These findings highlight how controlled persona simulations can serve as a scalable testbed for evaluating safety risks in AI companions.
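
The abstract's analysis step (emotion modeling plus LLM-assisted utterance- and harm-level classification) can also be sketched. In the stub below, tag_emotion and tag_harm are hypothetical placeholders for the paper's actual classifiers, and the label sets and example dialogue pairs are purely illustrative.

```python
# Hypothetical sketch of the two-level analysis over (user, companion) pairs.
from collections import Counter


def tag_emotion(reply: str) -> str:
    """Stand-in for the emotion model scoring each companion utterance."""
    return "care" if "you" in reply.lower() else "neutral"


def tag_harm(user_msg: str, reply: str) -> str:
    """Stand-in for an LLM-assisted harm classifier over the exchange; a real
    classifier would judge whether the reply mirrors or pushes back on the
    unsafe content, not just keyword-match the user turn as done here."""
    risky = any(k in user_msg.lower() for k in ("self-harm", "skip meals", "violent"))
    return "mirrors_unsafe_content" if risky else "safe"


def analyze(pairs: list[tuple[str, str]]) -> tuple[Counter, Counter]:
    """Aggregate utterance-level emotion tags and pair-level harm tags."""
    emotions = Counter(tag_emotion(reply) for _, reply in pairs)
    harms = Counter(tag_harm(user, reply) for user, reply in pairs)
    return emotions, harms


pairs = [
    ("I want to skip meals again.", "Tell me more about that."),
    ("Sometimes I think about self-harm.", "You can always talk to me about it."),
]
emotions, harms = analyze(pairs)
print(emotions.most_common())  # a narrow distribution (e.g. mostly "care") shows up here
print(harms.most_common())
```

Aggregating the two counters over all 1,674 dialogue pairs is what would surface the paper's headline findings: a skewed emotion distribution and a high rate of mirrored or normalized unsafe content.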