Multi-View Attention Multiple-Instance Learning Enhanced by LLM Reasoning for Cognitive Distortion Detection
arXiv cs.CL / 4/20/2026
Key Points
- The paper introduces a cognitive distortion detection framework that combines Large Language Model (LLM) reasoning with Multiple-Instance Learning (MIL) to better handle contextual ambiguity and semantic overlap.
- Each utterance is decomposed into Emotion, Logic, and Behavior (ELB) components, which the LLM uses to infer multiple distortion instances with predicted types, expressions, and LLM-assigned salience scores.
- A Multi-View Gated Attention mechanism integrates these LLM-inferred instances to produce the final classification output.
- Experiments on the KoACD (Korean) and Therapist QA (English) datasets show that adding ELB features and LLM-derived salience scores improves performance, particularly for distortions that are difficult to interpret.
- The authors state that the dataset and implementation details are publicly available, supporting reproducibility and further research in mental-health NLP.
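The aggregation step described above — pooling LLM-inferred distortion instances with gated attention, modulated by salience scores — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the shapes, the Ilse-style gated-attention form (`tanh` gate times `sigmoid` gate), and the choice to fold the LLM salience score in as an additive bias before the softmax are all assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention_pool(H, salience, V, U, w):
    """Gated-attention MIL pooling over distortion instances (sketch).

    H        : (K, D) embeddings of the K LLM-inferred instances
    salience : (K,)   LLM-assigned salience scores (assumed additive bias)
    V, U     : (L, D) projection matrices for the tanh and sigmoid gates
    w        : (L,)   attention scoring vector
    Returns the pooled bag embedding (D,) and attention weights (K,).
    """
    gate = np.tanh(H @ V.T) * (1.0 / (1.0 + np.exp(-(H @ U.T))))  # (K, L)
    scores = gate @ w + salience   # salience fused as a pre-softmax bias (assumption)
    a = softmax(scores)            # (K,) attention over instances
    z = a @ H                      # (D,) attention-weighted bag embedding
    return z, a

# Toy usage: three instances with different LLM salience scores
rng = np.random.default_rng(0)
K, D, L = 3, 8, 16
H = rng.standard_normal((K, D))
V = rng.standard_normal((L, D))
U = rng.standard_normal((L, D))
w = rng.standard_normal(L)
salience = np.array([0.9, 0.2, 0.5])
z, a = gated_attention_pool(H, salience, V, U, w)
```

The pooled embedding `z` would then feed a classification head for the final distortion label; in a multi-view setup, one such pooling per view (e.g. Emotion, Logic, Behavior) could be run and the view outputs fused.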