Creating and Evaluating Figurative Language Dataset for Sindhi

arXiv cs.CL / 5/5/2026

Key Points

  • The paper introduces SiNFluD, a new benchmark dataset specifically designed for Sindhi figurative language classification.
  • The dataset is built by collecting raw Sindhi text from blogs, social media, and literary sources, then preparing it for human annotation.
  • Two native annotators label the data using Doccano, reaching an inter-annotator agreement of 0.81.
  • Baseline experiments are reported using 5-fold and 10-fold cross-validation; the study fine-tunes mBERT, XLM-RoBERTa, and XLM-RoBERTa-XL, and additionally evaluates SetFit for few-shot fine-tuning of sentence transformers.
  • The results show that the pretrained XLM-RoBERTa-XL model delivers the best overall performance on the benchmark.
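
The 0.81 agreement score above is most likely a chance-corrected statistic such as Cohen's kappa, though the summary does not name the metric. As a minimal sketch (the metric choice is an assumption, and the labels below are toy data, not from SiNFluD), kappa for two annotators can be computed in pure Python:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items both annotators labeled the same
    po = sum(x == y for x, y in zip(a, b)) / n
    # expected chance agreement from each annotator's label distribution
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# toy example: two annotators tagging sentences as figurative vs. literal
ann1 = ["fig", "lit", "fig", "fig", "lit", "lit", "fig", "lit"]
ann2 = ["fig", "lit", "fig", "lit", "lit", "lit", "fig", "fig"]
print(round(cohen_kappa(ann1, ann2), 2))  # → 0.5
```

A kappa of 0.81, as reported for SiNFluD, is conventionally read as "almost perfect" agreement on the Landis-Koch scale.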

Abstract

In this article, we introduce SiNFluD, a novel benchmark dataset for Sindhi figurative language classification. We first collect raw text from various blogs, social media platforms, and literary sources, and subsequently prepare the corpus for annotation. Two native annotators label the data using the Doccano text annotation tool, achieving an inter-annotator agreement of 0.81. We then establish baseline results using 5-fold and 10-fold cross-validation. Finally, we evaluate mBERT, XLM-RoBERTa, and XLM-RoBERTa-XL models, along with SetFit for few-shot fine-tuning of sentence transformers. Among these, the pretrained XLM-RoBERTa-XL achieves the best performance.
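
The 5-fold and 10-fold protocol used for the baselines can be sketched in plain Python. This is a generic illustration of fold construction only (the function name, seed, and split logic are illustrative assumptions, not the paper's code), showing the key invariant: every example serves as test data exactly once.

```python
import random

def kfold_splits(n_items, k, seed=13):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)        # shuffle once so folds are unbiased
    folds = [idx[i::k] for i in range(k)]   # deal indices round-robin into k folds
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# with 100 examples and k=5, each fold holds 20 test items,
# and the test folds together cover the whole dataset
splits = list(kfold_splits(100, 5))
covered = sorted(j for _, test in splits for j in test)
print(covered == list(range(100)))  # → True
```

In practice a model (e.g. one of the fine-tuned transformers) is trained on each `train` split and scored on the matching `test` split, and the k scores are averaged to give the reported baseline.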