Cognitive Policy-Driven LLM for Diagnosis and Intervention of Cognitive Distortions in Emotional Support Conversation
arXiv cs.CL / 4/21/2026
Key Points
- The paper introduces an LLM approach for Emotional Support Conversation (ESC) that explicitly targets cognitive distortions in help-seekers’ statements, going beyond surface-level emotional comfort.
- It presents the CogBiasESC dataset, which extends existing ESC datasets with labels for cognitive distortion type, intensity, and safety risk level.
- The authors propose CoPoLLM (Cognitive Policy-driven Large Language Model), a framework designed to diagnose cognitive distortions and generate more effective intervention strategies.
- Experiments report that CoPoLLM outperforms 15 state-of-the-art baselines on diagnostic accuracy, intervention effectiveness, and safety risk control.
- A theoretical analysis is provided to argue for CoPoLLM’s safety advantages in this setting.
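The diagnose-then-intervene split described above can be sketched as a toy pipeline. Everything here is illustrative: the label schema, distortion taxonomy, keyword cues, and strategy table are assumptions for the sake of the sketch, not the paper's actual CogBiasESC fields or CoPoLLM policy, and a simple keyword matcher stands in for the LLM diagnosis stage.

```python
from dataclasses import dataclass

# Hypothetical label schema for a CogBiasESC-style record; the paper's
# actual field names and taxonomies may differ.
@dataclass
class AnnotatedUtterance:
    text: str
    distortion_type: str   # e.g. "catastrophizing", "overgeneralization", "none"
    intensity: int         # e.g. 0 (absent) .. 3 (severe)
    safety_risk: str       # e.g. "low" / "medium" / "high"

# Toy keyword cues standing in for the LLM diagnosis stage.
CUES = {
    "catastrophizing": ["ruined", "disaster", "never recover"],
    "overgeneralization": ["always", "never", "everyone"],
}

def diagnose(text: str) -> AnnotatedUtterance:
    """Stage 1: label the help-seeker's statement with a distortion diagnosis."""
    lowered = text.lower()
    for dtype, cues in CUES.items():
        if any(cue in lowered for cue in cues):
            return AnnotatedUtterance(text, dtype, intensity=2, safety_risk="medium")
    return AnnotatedUtterance(text, "none", intensity=0, safety_risk="low")

# Stage 2: a policy table mapping the diagnosis to an intervention strategy,
# echoing the cognitive-policy idea; the real framework generates responses.
STRATEGIES = {
    "catastrophizing": "decatastrophizing: examine the realistic worst case",
    "overgeneralization": "evidence check: look for counterexamples",
    "none": "reflective listening",
}

def intervene(utterance: AnnotatedUtterance) -> str:
    return STRATEGIES[utterance.distortion_type]
```

For example, `diagnose("I failed once, so I will never succeed")` yields the hypothetical label `overgeneralization`, and `intervene` then selects the matching counterexample-seeking strategy rather than generic comfort.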