ViGoEmotions: A Benchmark Dataset For Fine-grained Emotion Detection on Vietnamese Texts

arXiv cs.CL / 3/27/2026


Key Points

  • The paper introduces ViGoEmotions, a Vietnamese social media dataset with 20,664 comments annotated with 27 fine-grained emotion labels for emotion detection research.
  • Eight pre-trained Transformer-based models are benchmarked using three emoji handling/preprocessing strategies: preserving original emojis, converting emojis to text, and applying ViSoLex lexical normalization.
  • Experimental results indicate that converting emojis into textual descriptions improves several BERT-based baselines, while preserving emojis tends to work best for ViSoBERT and CafeBERT.
  • Removing emojis generally reduces model performance, underscoring the importance of emoji information for fine-grained emotion classification.
  • ViSoBERT achieves the top results (Macro F1 of 61.50%, Weighted F1 of 63.26%), showing the dataset supports multiple architectures, while preprocessing strategy and annotation quality remain key determinants of downstream performance.
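The three emoji-handling strategies above can be sketched as simple text transforms. This is an illustrative sketch only: the tiny `EMOJI2TEXT` mapping and function names are hypothetical stand-ins, not the paper's actual pipeline (which uses rule-based normalization and the ViSoLex system).

```python
# Illustrative sketch of the three emoji-handling strategies.
# EMOJI2TEXT is a hypothetical subset; a real pipeline would use a
# full emoji lexicon or a library such as `emoji`.
import re

EMOJI2TEXT = {
    "😂": " face_with_tears_of_joy ",
    "😡": " angry_face ",
}

EMOJI_PATTERN = re.compile("|".join(map(re.escape, EMOJI2TEXT)))

def preserve_emojis(text: str) -> str:
    """Strategy 1: keep emojis as-is (normalization handled elsewhere)."""
    return text

def emojis_to_text(text: str) -> str:
    """Strategy 2: replace each emoji with a textual description."""
    return EMOJI_PATTERN.sub(lambda m: EMOJI2TEXT[m.group(0)], text)

def remove_emojis(text: str) -> str:
    """Ablation: strip emojis entirely (generally hurts performance)."""
    return EMOJI_PATTERN.sub("", text)

comment = "Phim hay quá 😂"
print(emojis_to_text(comment))  # "Phim hay quá  face_with_tears_of_joy "
print(remove_emojis(comment))   # "Phim hay quá "
```

The finding that removal underperforms both alternatives suggests emojis carry emotion signal that models can exploit either as raw tokens (ViSoBERT, CafeBERT) or, for models with narrower vocabularies, as substituted text.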

Abstract

Emotion classification plays a significant role in emotion prediction and harmful content detection. Recent advancements in NLP, particularly through large language models (LLMs), have greatly improved outcomes in this field. This study introduces ViGoEmotions -- a Vietnamese emotion corpus comprising 20,664 social media comments, each classified into 27 distinct fine-grained emotions. To assess the quality of the dataset and its impact on emotion classification, eight pre-trained Transformer-based models were evaluated under three preprocessing strategies: preserving original emojis with rule-based normalization, converting emojis into textual descriptions, and applying ViSoLex, a model-based lexical normalization system. Results show that converting emojis into text often improves the performance of several BERT-based baselines, while preserving emojis yields the best results for ViSoBERT and CafeBERT. In contrast, removing emojis generally leads to lower performance. ViSoBERT achieved the highest Macro F1-score of 61.50% and Weighted F1-score of 63.26%, with strong performance also observed from CafeBERT and PhoBERT. These findings highlight that while the proposed corpus can support diverse architectures effectively, preprocessing strategies and annotation quality remain key factors influencing downstream performance.