CMHL: Contrastive Multi-Head Learning for Emotionally Consistent Text Classification
arXiv cs.CL / 3/17/2026
Key Points
- CMHL is a single-model architecture that explicitly models emotional structure through multi-task learning (predicting primary emotion, valence, and intensity), psychologically grounded supervision from Russell's circumplex model, and a novel contrastive contradiction loss that enforces emotional consistency.
- With 125M parameters, CMHL outperforms 56x larger LLMs and ensembles, achieving a new state-of-the-art F1 score of 93.75% on the dair-ai Emotion dataset.
- The approach generalizes across domains, outperforming domain-specific models on SWMH (Reddit Suicide Watch and Mental Health Collection) with an F1 of about 72.5% and recall of about 73.3%, indicating heightened sensitivity to mental health distress.
- The work argues that architectural intelligence and embedding psychological priors, rather than sheer parameter count, drive progress in emotion classification, offering an efficient, interpretable, and clinically relevant paradigm for affective computing.
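The multi-head design and the contradiction loss can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration only: the paper's actual head architectures, loss form, and circumplex mapping are not specified here, so the per-emotion valence signs (`CIRCUMPLEX_VALENCE`), the tanh/sigmoid output ranges, and the squared-gap loss are hypothetical stand-ins for the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 768-d pooled encoder output, 6 dair-ai emotion classes.
D, N_EMOTIONS = 768, 6
EMOTIONS = ["sadness", "joy", "love", "anger", "fear", "surprise"]
# Assumed valence sign per emotion under Russell's circumplex: +1 positive, -1 negative.
CIRCUMPLEX_VALENCE = np.array([-1.0, 1.0, 1.0, -1.0, -1.0, 1.0])

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Three task heads sharing one encoder representation h (multi-task learning).
W_emo = rng.normal(0, 0.02, (D, N_EMOTIONS))  # primary-emotion head
w_val = rng.normal(0, 0.02, D)                # valence head -> scalar in [-1, 1]
w_int = rng.normal(0, 0.02, D)                # intensity head -> scalar in [0, 1]

def forward(h):
    p_emo = softmax(h @ W_emo)
    valence = np.tanh(h @ w_val)
    intensity = 1.0 / (1.0 + np.exp(-(h @ w_int)))
    return p_emo, valence, intensity

def contradiction_loss(p_emo, valence):
    """Penalize disagreement between the valence implied by the predicted
    emotion distribution and the valence head's own output (assumed form)."""
    implied = p_emo @ CIRCUMPLEX_VALENCE       # expected circumplex valence
    return np.mean((implied - valence) ** 2)   # squared consistency gap

h = rng.normal(0, 1, (4, D))                   # a batch of 4 pooled embeddings
p_emo, valence, intensity = forward(h)
loss = contradiction_loss(p_emo, valence)
```

The key property this sketch captures is that the consistency term couples two heads: a model confidently predicting "joy" while its valence head outputs a strongly negative score is penalized, which is one plausible reading of how an emotional-consistency constraint could be enforced.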