AI Safety Training Can Be Clinically Harmful
arXiv cs.CL · April 28, 2026
Key Points
- The paper argues that scaling LLM-based mental health support is risky: only a small fraction of such interventions has been clinically tested, and simulations show psychological deterioration in a large share of cases.
- In evaluations across Prolonged Exposure and cognitive behavioral therapy (CBT) scenarios, including severity-escalated variants, the models scored highly on surface-level acknowledgment but failed badly on therapeutic appropriateness and protocol fidelity, with two models showing zero fidelity.
- The study identifies a systematic failure mode in which RLHF-style safety alignment can disrupt the intended therapeutic mechanism: giving false reassurance or misplaced crisis resources, refusing to challenge harmful cognitions, or abandoning tasks mid-session during CBT.
- The authors propose a five-axis evaluation framework covering protocol fidelity, hallucination risk, behavioral consistency, crisis safety, and demographic robustness, and map it to FDA Software as a Medical Device (SaMD) guidance and EU AI Act requirements; a minimal sketch of the resulting deployment gate follows this list.
- They conclude that no AI mental health system should move to deployment without passing multi-axis evaluation across all five dimensions, emphasizing the need for rigorous safety and efficacy checks beyond general alignment.
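To make the all-axes-must-pass requirement concrete, here is a minimal Python sketch of such a deployment gate. The five axis names come from the paper's framework, but the threshold values, the `EvalReport` structure, and the `deployment_gate` function are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical cut-offs for illustration; the paper does not publish
# numeric thresholds, so these values are placeholders.
THRESHOLDS = {
    "protocol_fidelity": 0.90,
    "hallucination_risk": 0.05,   # a risk measure: lower is better
    "behavioral_consistency": 0.85,
    "crisis_safety": 0.99,
    "demographic_robustness": 0.85,
}

@dataclass
class EvalReport:
    """Scores in [0, 1] on each of the five evaluation axes."""
    protocol_fidelity: float
    hallucination_risk: float
    behavioral_consistency: float
    crisis_safety: float
    demographic_robustness: float

def deployment_gate(report: EvalReport) -> tuple[bool, list[str]]:
    """Return (deployable, failing_axes).

    Every axis must clear its own threshold; there is no aggregate
    score, mirroring the paper's conclusion that a system must pass
    all five dimensions before deployment.
    """
    failing = []
    for axis, cutoff in THRESHOLDS.items():
        score = getattr(report, axis)
        # Hallucination risk passes when *below* its cut-off.
        ok = score <= cutoff if axis == "hallucination_risk" else score >= cutoff
        if not ok:
            failing.append(axis)
    return (not failing, failing)

if __name__ == "__main__":
    # Strong surface performance but a crisis-safety miss still blocks release.
    report = EvalReport(
        protocol_fidelity=0.93,
        hallucination_risk=0.02,
        behavioral_consistency=0.91,
        crisis_safety=0.80,
        demographic_robustness=0.88,
    )
    deployable, failing = deployment_gate(report)
    print("deployable:", deployable, "| failing axes:", failing)
```

Gating per axis rather than averaging matters here: a model with near-perfect empathy scores could otherwise mask exactly the crisis-safety and protocol-fidelity failures the paper reports.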