AI Safety Training Can Be Clinically Harmful

arXiv cs.CL / 4/28/2026

Key Points

  • The paper argues that scaling LLM-based mental health support is risky because only 16% of LLM-based chatbot interventions have undergone rigorous clinical efficacy testing, and simulations show psychological deterioration in over one-third of cases.
  • In evaluations across Prolonged Exposure and CBT scenarios (including severity-escalated variants), all four models scored near-perfectly on surface-level acknowledgment, yet therapeutic appropriateness collapsed at the highest severity and protocol fidelity fell to zero for two models.
  • The study identifies a systematic failure mode in which RLHF-style safety alignment disrupts the intended therapeutic mechanism: giving false reassurance, inserting crisis resources into controlled exercises, refusing to challenge harmful cognitions, and abandoning tasks during CBT.
  • The authors propose a five-axis evaluation framework covering protocol fidelity, hallucination risk, behavioral consistency, crisis safety, and demographic robustness, and map it onto FDA SaMD and EU AI Act requirements.
  • They conclude that no AI mental health system should move to deployment without passing multi-axis evaluation across all five dimensions, emphasizing the need for rigorous safety and efficacy checks beyond general alignment; a minimal sketch of such a deployment gate follows this list.
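
The "all five dimensions" requirement can be read as a hard deployment gate: a single failing axis blocks release. The sketch below is only an illustration of that idea, assuming per-axis scores normalized to [0, 1]; the thresholds, example scores, and data structure are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

# The five axes named in the paper; thresholds below are illustrative assumptions.
AXES = (
    "protocol_fidelity",
    "hallucination_risk",      # scored so that higher = safer
    "behavioral_consistency",
    "crisis_safety",
    "demographic_robustness",
)

@dataclass
class AxisResult:
    axis: str
    score: float       # aggregated score in [0, 1]
    threshold: float   # minimum acceptable score for this axis

    @property
    def passed(self) -> bool:
        return self.score >= self.threshold

def deployment_gate(results: list[AxisResult]) -> bool:
    """Return True only if every one of the five axes meets its threshold.

    A single failing axis (e.g., crisis safety collapsing under severity
    escalation) blocks deployment, mirroring the all-axes requirement.
    """
    covered = {r.axis for r in results}
    if covered != set(AXES):
        raise ValueError(f"Missing axes: {set(AXES) - covered}")
    return all(r.passed for r in results)

if __name__ == "__main__":
    # Hypothetical aggregated scores for one model under severity escalation.
    results = [
        AxisResult("protocol_fidelity", 0.22, 0.80),
        AxisResult("hallucination_risk", 0.95, 0.90),
        AxisResult("behavioral_consistency", 0.71, 0.85),
        AxisResult("crisis_safety", 0.61, 0.90),
        AxisResult("demographic_robustness", 0.88, 0.85),
    ]
    print("Cleared for deployment:", deployment_gate(results))  # -> False
```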

Abstract

Large language models are being deployed as mental health support agents at scale, yet only 16% of LLM-based chatbot interventions have undergone rigorous clinical efficacy testing, and simulations reveal psychological deterioration in over one-third of cases. We evaluate four generative models on 250 Prolonged Exposure (PE) therapy scenarios and 146 CBT cognitive restructuring exercises (plus 29 severity-escalated variants), scored by a three-judge LLM panel. All models scored near-perfectly on surface acknowledgment (~0.91-1.00) while therapeutic appropriateness collapsed to 0.22-0.33 at the highest severity for three of four models, with protocol fidelity reaching zero for two. Under CBT severity escalation, one model's task completeness dropped from 92% to 71% while the frontier model's safety-interference score fell from 0.99 to 0.61. We identify a systematic, modality-spanning failure: RLHF safety alignment disrupts the therapeutic mechanism of action by grounding patients during imaginal exposure, offering false reassurance, inserting crisis resources into controlled exercises, and refusing to challenge distorted cognitions mentioning self-harm in PE; and through task abandonment or safety-preamble insertion during CBT cognitive restructuring. These findings motivate a five-axis evaluation framework (protocol fidelity, hallucination risk, behavioral consistency, crisis safety, demographic robustness), mapped onto FDA SaMD and EU AI Act requirements. We argue that no AI mental health system should proceed to deployment without passing multi-axis evaluation across all five dimensions.
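
As a rough illustration of how scores from a three-judge LLM panel could be aggregated per dimension, the sketch below takes each judge's per-dimension scores and reports the panel median. The dimension names, the median rule, and the numbers are assumptions for illustration, not the paper's rubric or data.

```python
from statistics import median

# Illustrative dimensions only; the paper's actual rubric may differ.
DIMENSIONS = ("acknowledgment", "therapeutic_appropriateness", "protocol_fidelity")

def panel_score(judge_scores: list[dict[str, float]]) -> dict[str, float]:
    """Aggregate per-dimension scores from a judge panel via the median."""
    return {
        dim: median(js[dim] for js in judge_scores)
        for dim in DIMENSIONS
    }

# Example: a severity-escalated PE scenario where surface acknowledgment stays
# high while appropriateness and fidelity collapse (the pattern, not real data).
judges = [
    {"acknowledgment": 0.97, "therapeutic_appropriateness": 0.30, "protocol_fidelity": 0.0},
    {"acknowledgment": 1.00, "therapeutic_appropriateness": 0.25, "protocol_fidelity": 0.0},
    {"acknowledgment": 0.93, "therapeutic_appropriateness": 0.22, "protocol_fidelity": 0.1},
]
print(panel_score(judges))
# {'acknowledgment': 0.97, 'therapeutic_appropriateness': 0.25, 'protocol_fidelity': 0.0}
```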