TherapyGym: Evaluating and Aligning Clinical Fidelity and Safety in Therapy Chatbots

arXiv cs.AI / 3/20/2026

Key Points

  • THERAPYGYM introduces a framework to evaluate and improve therapy chatbots along two axes: fidelity to evidence-based CBT techniques, measured with an automated CTRS pipeline, and safety, assessed via a multi-label annotation scheme.
  • It releases THERAPYJUDGEBENCH, a validation set with 116 dialogues and 1,270 expert ratings to audit and calibrate judgments against licensed clinicians, addressing biases in LLM-based judging.
  • The framework can drive safe RL by using CTRS and safety-based rewards with configurable patient simulations across diverse symptom profiles.
  • Empirical results show models trained with THERAPYGYM improve clinical fidelity, with CTRS scores rising from 0.10 to 0.60 (and 0.16 to 0.59 under LLM judges).
  • Overall, the work supports scalable development of therapy chatbots that are faithful to evidence-based practice and safer in high-stakes mental-health settings.
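
The reward design described in the key points, combining CTRS fidelity scores with safety annotations, could be sketched as follows. This is a minimal illustration, not the paper's implementation: the label names, weights, and clipping floor are assumptions.

```python
from dataclasses import dataclass

# Illustrative multi-label safety categories (hypothetical names; the paper's
# actual annotation scheme covers therapy-specific risks such as failing to
# address harm or abuse).
SAFETY_LABELS = {"ignored_harm", "ignored_abuse", "harmful_advice"}

@dataclass
class SessionJudgment:
    ctrs_score: float            # automated CTRS fidelity score, normalized to [0, 1]
    safety_violations: set[str]  # safety labels triggered during the session

def session_reward(j: SessionJudgment, safety_penalty: float = 0.5) -> float:
    """Fidelity reward minus a fixed penalty per safety violation, clipped at a floor."""
    penalty = safety_penalty * len(j.safety_violations & SAFETY_LABELS)
    return max(j.ctrs_score - penalty, -1.0)

# A safe session keeps its full CTRS-based reward; violations pull it down.
clean = session_reward(SessionJudgment(0.8, set()))
flagged = session_reward(SessionJudgment(0.6, {"ignored_harm"}))
```

A penalty-based combination like this keeps the reward dense enough for RL while making safety violations strictly costly; the actual trade-off between fidelity and safety terms would be a tuning choice.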

Abstract

Large language models (LLMs) are increasingly used for mental-health support, yet prevailing evaluation methods (fluency metrics, preference tests, and generic dialogue benchmarks) fail to capture the clinically critical dimensions of psychotherapy. We introduce THERAPYGYM, a framework that evaluates and improves therapy chatbots along two clinical pillars: fidelity and safety. Fidelity is measured using the Cognitive Therapy Rating Scale (CTRS), implemented as an automated pipeline that scores adherence to CBT techniques over multi-turn sessions. Safety is assessed using a multi-label annotation scheme covering therapy-specific risks (e.g., failing to address harm or abuse). To mitigate bias and unreliability in LLM-based judges, we further release THERAPYJUDGEBENCH, a validation set of 116 dialogues with 1,270 expert ratings for auditing and calibrating judges against licensed clinicians. THERAPYGYM also serves as a training harness: CTRS and safety-based rewards drive RL with configurable patient simulations spanning diverse symptom profiles. Models trained in THERAPYGYM improve on expert ratings, with average CTRS rising from 0.10 to 0.60 (and 0.16 to 0.59 under LLM judges). Our work enables scalable development of therapy chatbots that are faithful to evidence-based practice and safer in high-stakes use.
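
The auditing step behind THERAPYJUDGEBENCH, comparing an LLM judge's scores against licensed clinicians' ratings on the same dialogues, might look like the sketch below. The metric (mean absolute error) and the paired-score layout are assumptions for illustration; the paper does not specify its calibration procedure here.

```python
from statistics import mean

def judge_agreement(llm_scores: list[float], expert_scores: list[float]) -> float:
    """Mean absolute error between LLM-judge and clinician ratings on paired dialogues."""
    if len(llm_scores) != len(expert_scores):
        raise ValueError("scores must be paired per dialogue")
    return mean(abs(a - b) for a, b in zip(llm_scores, expert_scores))

# Hypothetical per-dialogue CTRS-style scores from each rater.
llm = [0.4, 0.7, 0.2]
experts = [0.5, 0.6, 0.2]
mae = judge_agreement(llm, experts)
```

A low disagreement on a held-out expert-rated set is what would justify using the automated judge as a stand-in for clinicians during training.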