
NepTam: A Nepali-Tamang Parallel Corpus and Baseline Machine Translation Experiments

arXiv cs.CL / 3/17/2026

📰 News · Models & Research

Key Points

  • NepTam20K provides a 20,000-sentence gold-standard Nepali-Tamang parallel corpus, and NepTam80K provides an 80,000-sentence synthetic parallel corpus; both are designed to support machine translation.
  • The datasets are sentence-aligned and built through a pipeline of data scraping from Nepali news and online sources, preprocessing, semantic filtering, tense/polarity balancing (for NepTam20K only), expert translation by native Tamang speakers, and verification by an expert Tamang linguist; one plausible filtering step is sketched after this list.
  • The corpus covers five domains: Agriculture, Health, Education and Technology, Culture, and General Communication.
  • Baseline translation experiments with multilingual pre-trained models (mBART, M2M-100, NLLB-200) and a vanilla Transformer show that fine-tuning NLLB-200 achieves the highest sacreBLEU scores: 40.92 (Nepali-Tamang) and 45.26 (Tamang-Nepali).
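
The paper does not publish its exact semantic-filtering recipe, so the following is a minimal sketch of one plausible step for cleaning the scraped Nepali sentences: greedy near-duplicate removal with LaBSE sentence embeddings. The encoder choice, the 0.9 cosine threshold, and the greedy pass are all assumptions for illustration.

```python
# One plausible semantic-filtering step (illustrative, not the paper's
# published method): greedy near-duplicate removal over scraped Nepali
# sentences using LaBSE embeddings and a cosine-similarity threshold.
import torch
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("sentence-transformers/LaBSE")  # multilingual encoder

def filter_near_duplicates(sentences, threshold=0.9):
    """Keep each sentence only if it is not too similar to any kept one."""
    embeddings = encoder.encode(
        sentences, convert_to_tensor=True, normalize_embeddings=True
    )
    kept, kept_embeddings = [], []
    for sentence, embedding in zip(sentences, embeddings):
        if kept_embeddings:
            sims = util.cos_sim(embedding, torch.stack(kept_embeddings))
            if sims.max().item() > threshold:
                continue  # near-duplicate of a sentence already kept
        kept.append(sentence)
        kept_embeddings.append(embedding)
    return kept
```

At web-scrape scale the quadratic inner loop would typically be replaced by an approximate-nearest-neighbour index (e.g. FAISS), but the thresholding idea is the same.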

Abstract

Modern translation systems rely heavily on large, high-quality parallel datasets for state-of-the-art performance. However, such resources are largely unavailable for most South Asian languages. Nepali and Tamang fall into this category, with Tamang among the least digitally resourced languages in the region. This work addresses the gap by developing NepTam20K, a 20K-sentence gold-standard parallel corpus, and NepTam80K, an 80K-sentence synthetic Nepali-Tamang parallel corpus, both sentence-aligned and designed to support machine translation. The datasets were created through a pipeline involving data scraping from Nepali news and online sources, pre-processing, semantic filtering, balancing for tense and polarity (in the NepTam20K dataset), expert translation into Tamang by native speakers, and verification by an expert Tamang linguist. The dataset covers five domains: Agriculture, Health, Education and Technology, Culture, and General Communication. To evaluate the dataset, baseline machine translation experiments were carried out with multilingual pre-trained models (mBART, M2M-100, and NLLB-200) and a vanilla Transformer model. Fine-tuning NLLB-200 achieved the highest sacreBLEU scores: 40.92 (Nepali-Tamang) and 45.26 (Tamang-Nepali).
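
For context, here is a minimal sketch of how such an evaluation is typically wired up, assuming the Hugging Face transformers interface to NLLB-200 and the sacrebleu library. Everything paper-specific is an assumption: Tamang has no official NLLB-200 language code, so the "taj_Deva" tag stands in for whatever repurposed or newly added tag a fine-tuned model would use, and the checkpoint path and test-file names are placeholders.

```python
# Sketch of NLLB-200 inference plus sacreBLEU scoring. Assumptions:
# "./nllb-neptam" is a hypothetical fine-tuned checkpoint, "taj_Deva" a
# hypothetical Tamang language tag added during fine-tuning, and the
# test-file paths are placeholders.
import sacrebleu
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "./nllb-neptam"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="npi_Deva")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def translate(batch, tgt_lang="taj_Deva"):
    inputs = tokenizer(batch, return_tensors="pt", padding=True, truncation=True)
    generated = model.generate(
        **inputs,
        # NLLB steers the output language via a forced target-language token.
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=128,
    )
    return tokenizer.batch_decode(generated, skip_special_tokens=True)

sources = [line.strip() for line in open("test.ne", encoding="utf-8")]
references = [line.strip() for line in open("test.taj", encoding="utf-8")]
hypotheses = translate(sources)
# corpus_bleu takes the hypotheses plus a list of reference streams.
score = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"sacreBLEU: {score.score:.2f}")
```

In practice the test set would be translated in small batches rather than one call, and sacreBLEU's default tokenizer may need adjusting for Devanagari-script text, so the paper's exact scoring signature may differ.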