Mitigating Extrinsic Gender Bias for Bangla Classification Tasks

arXiv cs.CL / 4/13/2026


Key Points

  • The paper studies extrinsic gender bias in Bangla pretrained language models, focusing on low-resource language settings that have received limited attention.
  • It introduces four manually annotated benchmark datasets for sentiment, toxicity, hate speech, and sarcasm classification, then augments them with controlled gender perturbations for minimal-pair bias evaluation.
  • The authors propose RandSymKL, a randomized debiasing training method that combines symmetric KL divergence with cross-entropy loss to mitigate gender-driven prediction shifts across multiple task-specific models.
  • Experiments compare RandSymKL to existing debiasing approaches, reporting reduced bias while preserving competitive classification accuracy.
  • The study releases the implementation and the constructed datasets publicly to enable follow-on research on Bangla gender-bias mitigation.
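The controlled gender perturbation described above can be sketched as a simple token-level swap over a gendered-term lexicon, producing the minimal pair for each example. The word list below is a small hypothetical sample for illustration, not the paper's actual lexicon, and real perturbation would also handle names and morphology:

```python
# Illustrative minimal-pair gender perturbation for Bangla text.
# GENDER_PAIRS is a hypothetical sample, not the paper's lexicon.
GENDER_PAIRS = [
    ("ছেলে", "মেয়ে"),  # boy <-> girl
    ("ভাই", "বোন"),    # brother <-> sister
    ("বাবা", "মা"),     # father <-> mother
]
SWAP = {}
for a, b in GENDER_PAIRS:
    SWAP[a], SWAP[b] = b, a

def perturb(sentence: str) -> str:
    """Swap gendered tokens while leaving all other content unchanged,
    yielding the gender-flipped member of a minimal pair."""
    return " ".join(SWAP.get(tok, tok) for tok in sentence.split())
```

Comparing a model's predictions on the original and perturbed sentence then isolates gender-driven prediction shifts, since the two differ only in the swapped terms.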

Abstract

In this study, we investigate extrinsic gender bias in Bangla pretrained language models, a largely underexplored area in low-resource languages. To assess this bias, we construct four manually annotated, task-specific benchmark datasets for sentiment analysis, toxicity detection, hate speech detection, and sarcasm detection. Each dataset is augmented with nuanced gender perturbations: we systematically swap gendered names and terms while preserving semantic content, enabling minimal-pair evaluation of gender-driven prediction shifts. We then propose RandSymKL, a randomized debiasing strategy that integrates symmetric KL divergence with cross-entropy loss to mitigate this bias across task-specific pretrained models, unifying these elements in a single training procedure for extrinsic gender-bias mitigation in classification tasks. We evaluate our approach against existing bias mitigation methods; the results show that it not only effectively reduces bias but also maintains accuracy competitive with baseline approaches. To promote further research, we have made both our implementation and datasets publicly available: https://github.com/sajib-kumar/Mitigating-Bangla-Extrinsic-Gender-Bias
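One plausible reading of the RandSymKL objective is a cross-entropy term on the original example plus a randomly gated symmetric-KL consistency penalty between the model's predictions on the original and gender-perturbed inputs. The sketch below is a minimal, dependency-free illustration of that reading; the gating scheme, the weighting `lam`, and the function names are assumptions, not the authors' exact formulation:

```python
import math
import random

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl(p, q):
    # KL(p || q) for two discrete distributions.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def sym_kl(logits_a, logits_b):
    # Symmetric KL divergence: KL(p||q) + KL(q||p).
    p, q = softmax(logits_a), softmax(logits_b)
    return kl(p, q) + kl(q, p)

def cross_entropy(logits, label):
    return -math.log(softmax(logits)[label])

def randsymkl_loss(logits_orig, logits_pert, label,
                   lam=1.0, gate_prob=0.5, rng=random):
    """Cross-entropy on the original example plus a randomly gated
    symmetric-KL consistency penalty between original and
    gender-perturbed predictions (gating form is an assumption)."""
    loss = cross_entropy(logits_orig, label)
    if rng.random() < gate_prob:  # randomized application of the penalty
        loss += lam * sym_kl(logits_orig, logits_pert)
    return loss
```

The consistency term is zero when the model assigns identical distributions to both members of a minimal pair, so minimizing it pushes predictions to be invariant to the gender swap while the cross-entropy term preserves task accuracy.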