CMHL: Contrastive Multi-Head Learning for Emotionally Consistent Text Classification
arXiv cs.CL / 3/17/2026
Key Points
- CMHL is a single-model architecture that explicitly models emotional structure through multi-task learning (jointly predicting primary emotion, valence, and intensity), psychologically grounded supervision derived from Russell's circumplex model, and a novel contrastive contradiction loss that enforces emotional consistency.
- With 125M parameters, CMHL outperforms LLMs and ensembles up to 56x its size, achieving a new state-of-the-art F1 score of 93.75% on the dair-ai Emotion dataset.
- The approach also generalizes across domains, outperforming domain-specific models on SWMH (Reddit Suicide Watch and Mental Health Collection) with an F1 of roughly 72.50% and recall of roughly 73.30%, indicating heightened sensitivity to mental health distress.
- The work argues that architectural design and embedded psychological priors, rather than sheer parameter count, drive progress in emotion classification, offering an efficient, interpretable, and clinically relevant paradigm for affective computing.
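The contradiction loss described above can be sketched in miniature: penalize predictions whose valence output disagrees with the valence implied by the predicted emotion under a circumplex-style mapping. This is a hedged illustration, not the paper's exact formulation; the emotion set and valence signs below are assumptions, and the real model presumably operates on transformer features and batched logits.

```python
import numpy as np

# Hypothetical emotion-to-valence mapping inspired by Russell's circumplex
# model; the paper's actual coordinates and label set may differ.
EMOTION_VALENCE = {"joy": 1.0, "love": 1.0, "surprise": 0.5,
                   "sadness": -1.0, "anger": -1.0, "fear": -1.0}
EMOTIONS = list(EMOTION_VALENCE)

def softmax(z):
    """Numerically stable softmax over a 1-D logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def contradiction_loss(emotion_logits, valence_pred):
    """Sketch of a consistency penalty between two heads: squared gap
    between the valence head's output and the valence implied by the
    emotion head's distribution (not the paper's exact loss)."""
    p = softmax(np.asarray(emotion_logits, dtype=float))
    implied = sum(p[i] * EMOTION_VALENCE[e] for i, e in enumerate(EMOTIONS))
    return (implied - valence_pred) ** 2

# A confident "joy" prediction paired with positive valence is consistent
# (small loss); paired with negative valence it is contradictory (large loss).
joy_logits = np.array([5.0, 0.0, 0.0, 0.0, 0.0, 0.0])
consistent = contradiction_loss(joy_logits, 1.0)
contradictory = contradiction_loss(joy_logits, -1.0)
```

Minimizing this term alongside the per-head classification and regression losses is one plausible way a single backbone could be pushed toward emotionally consistent multi-head outputs.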