Dynamic Adaptive Attention and Supervised Contrastive Learning: A Novel Hybrid Framework for Text Sentiment Classification
arXiv cs.CL / 4/14/2026
Key Points
- The paper introduces a hybrid sentiment-classification framework built on a BERT-based Transformer encoder that combines dynamic adaptive multi-head attention with supervised contrastive learning.
- The dynamic adaptive attention mechanism pools the sequence into a global context vector and uses it to weight each attention head, sharpening focus on sentiment-critical tokens and suppressing noise from irrelevant spans of long reviews.
- The supervised contrastive learning branch reshapes the embedding space by encouraging tighter intra-class clustering and stronger inter-class separation.
- Experiments on the IMDB dataset report 94.67% accuracy, exceeding prior strong baselines by 1.5–2.5 percentage points; the authors also claim the approach is lightweight and extensible to other text classification tasks.
- Overall, the work targets common weaknesses of standard BERT/recurrent models in capturing long-range dependencies and handling ambiguous emotional expressions in lengthy texts.
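To make the head-weighting idea concrete, here is a minimal numpy sketch of multi-head self-attention gated by a global context pooling vector. The function name, the mean-pooling choice, and the gating matrix `Wg` are illustrative assumptions; the paper's exact parameterization may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_adaptive_attention(X, Wq, Wk, Wv, Wg, n_heads):
    """Multi-head self-attention whose heads are reweighted by gates
    derived from a global context pooling vector (illustrative sketch).

    X: (T, d) token embeddings; Wq/Wk/Wv: (d, d); Wg: (n_heads, d).
    """
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split into heads: (n_heads, T, dh)
    Q = Q.reshape(T, n_heads, dh).transpose(1, 0, 2)
    K = K.reshape(T, n_heads, dh).transpose(1, 0, 2)
    V = V.reshape(T, n_heads, dh).transpose(1, 0, 2)
    # Standard scaled dot-product attention per head
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(dh)
    heads = softmax(scores, axis=-1) @ V          # (n_heads, T, dh)
    # Global context pooling: mean over tokens, then per-head gates
    g = X.mean(axis=0)                            # (d,)
    gates = softmax(Wg @ g)                       # (n_heads,) sums to 1
    heads = heads * gates[:, None, None]          # reweight each head
    # Merge heads back to (T, d)
    return heads.transpose(1, 0, 2).reshape(T, d)
```

Because the gates depend on the pooled input, the relative contribution of each head varies per example, which is the intended mechanism for down-weighting heads attending to sentiment-irrelevant spans.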

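The supervised contrastive branch can likewise be sketched with a standard SupCon-style loss (in the spirit of Khosla et al., 2020); the paper may use a different variant, and the temperature value below is an assumption.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss sketch: embeddings sharing a label
    are pulled together, all others pushed apart (SupCon-style)."""
    z = np.asarray(z, dtype=float)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # cosine / temperature
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    exp_sim = np.exp(sim)
    exp_sim[self_mask] = 0.0                           # drop self-pairs
    denom = exp_sim.sum(axis=1)
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    losses = []
    for i in range(n):
        idx = np.where(pos[i])[0]
        if idx.size == 0:                              # no positives: skip anchor
            continue
        log_prob = sim[i, idx] - np.log(denom[i])
        losses.append(-log_prob.mean())
    return float(np.mean(losses))
```

A quick sanity check of the geometry the bullet describes: a batch whose same-label embeddings are tightly clustered yields a lower loss than one where they are spread apart, which is exactly the intra-class compactness / inter-class separation the framework optimizes for.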