Hope Speech Detection in code-mixed Roman Urdu tweets: A Positive Turn in Natural Language Processing

arXiv cs.CL / 3/13/2026

Key Points

  • The paper addresses hope speech detection in code-mixed Roman Urdu tweets, filling a gap in inclusive NLP for low-resource informal language varieties.
  • It introduces the first multi-class annotated dataset for Roman Urdu hope speech with categories Generalized Hope, Realistic Hope, Unrealistic Hope, and Not Hope.
  • It proposes a custom attention-based transformer model optimized for the syntactic and semantic variability of Roman Urdu, evaluated with 5-fold cross-validation.
  • It reports that XLM-R achieves the best cross-validation score of 0.78, outperforming a baseline SVM (0.75) and a BiLSTM (0.76), with relative gains of 4% and 2.63%, respectively.
  • It analyzes the psychological foundations of hope and linguistic patterns to inform dataset development and validates the results with a statistical t-test.

Abstract

Hope is a positive emotional state involving the expectation of favorable future outcomes, while hope speech refers to communication that promotes optimism, resilience, and support, particularly in adverse contexts. Although hope speech detection has gained attention in Natural Language Processing (NLP), existing research mainly focuses on high-resource languages and standardized scripts, often overlooking informal and underrepresented forms such as Roman Urdu. To the best of our knowledge, this is the first study to address hope speech detection in code-mixed Roman Urdu by introducing a carefully annotated dataset, thereby filling a critical gap in inclusive NLP research for low-resource, informal language varieties. This study makes four key contributions: (1) it introduces the first multi-class annotated dataset for Roman Urdu hope speech, comprising Generalized Hope, Realistic Hope, Unrealistic Hope, and Not Hope categories; (2) it explores the psychological foundations of hope and analyzes its linguistic patterns in code-mixed Roman Urdu to inform dataset development; (3) it proposes a custom attention-based transformer model optimized for the syntactic and semantic variability of Roman Urdu, evaluated using 5-fold cross-validation; and (4) it verifies the statistical significance of performance gains using a t-test. The proposed model, XLM-R, achieves the best performance with a cross-validation score of 0.78, outperforming the baseline SVM (0.75) and BiLSTM (0.76), with gains of 4% and 2.63% respectively.
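The evaluation protocol described above (5-fold cross-validation followed by a t-test on the performance gains) can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the study fine-tunes XLM-R on the annotated Roman Urdu dataset, neither of which is reproduced here, so a linear SVM baseline, a logistic-regression stand-in model, and synthetic 4-class data are assumed as placeholders.

```python
# Sketch of 5-fold CV plus a paired t-test over per-fold scores.
# All models and data here are placeholders for the paper's XLM-R,
# SVM baseline, and 4-class hope-speech dataset.
import numpy as np
from scipy.stats import ttest_rel
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in for the four classes (Generalized Hope, Realistic
# Hope, Unrealistic Hope, Not Hope); real inputs would be vectorized tweets.
X, y = make_classification(n_samples=500, n_classes=4, n_informative=8,
                           random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
svm_scores = cross_val_score(LinearSVC(), X, y, cv=cv)            # baseline
model_scores = cross_val_score(LogisticRegression(max_iter=1000),  # stand-in
                               X, y, cv=cv)

# Paired t-test on the per-fold scores, mirroring the paper's
# statistical-significance check on the reported gains.
t_stat, p_value = ttest_rel(model_scores, svm_scores)
print(f"baseline mean: {svm_scores.mean():.3f}  "
      f"model mean: {model_scores.mean():.3f}")
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.3f}")
```

Pairing the t-test over folds (rather than comparing single held-out scores) is what lets a small mean gain, such as the reported 0.75 vs. 0.78, be tested for significance.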