
Developing an English-Efik Corpus and Machine Translation System for Digitization Inclusion

arXiv cs.CL / 3/17/2026


Key Points

  • The study targets English-Efik translation for a low-resource language using a small parallel corpus of 13,865 sentence pairs.
  • It fine-tunes and compares two multilingual MT models, mT5 and NLLB-200; NLLB-200 performs better, achieving BLEU scores of 26.64 (English→Efik) and 31.21 (Efik→English) with chrF scores of 51.04 and 47.92, respectively.
  • The results demonstrate the feasibility of practical MT tools for low-resource languages and stress inclusive data practices and culturally grounded evaluation for equitable NLP.
  • The work highlights digitization inclusion and provides a path for broader representation of underrepresented languages in NLP research.
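The chrF scores above measure character n-gram overlap between a system translation and a reference, which tends to be more robust than word-level BLEU for morphologically rich, low-resource languages. As a rough illustration of how the metric works, here is a simplified pure-Python sketch (standard chrF uses character n-grams up to order 6 and beta = 2; real evaluations should use a reference implementation such as sacreBLEU rather than this sketch):

```python
# Simplified sentence-level chrF (character n-gram F-score).
# This is an illustrative sketch, not the official metric implementation.
from collections import Counter

def char_ngrams(text: str, n: int) -> Counter:
    """Count character n-grams, ignoring spaces (as chrF does by default)."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis: str, reference: str, max_n: int = 6, beta: float = 2.0) -> float:
    """Average n-gram F-beta over orders 1..max_n, scaled to 0-100.
    beta = 2 weights recall twice as heavily as precision."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # strings too short for this n-gram order
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return 100 * sum(scores) / len(scores) if scores else 0.0

print(round(chrf("the cat sat", "the cat sat"), 2))  # → 100.0 (exact match)
```

A perfect match scores 100; translations that share many character sequences with the reference, even with differing word boundaries or inflections, still earn partial credit.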

Abstract

Low-resource languages serve as invaluable repositories of human history, preserving cultural and intellectual diversity. Despite their significance, they remain largely absent from modern natural language processing systems. While progress has been made for widely spoken African languages such as Swahili, Yoruba, and Amharic, smaller indigenous languages like Efik continue to be underrepresented in machine translation research. This study evaluates the effectiveness of state-of-the-art multilingual neural machine translation models for English-Efik translation, leveraging a small-scale, community-curated parallel corpus of 13,865 sentence pairs. We fine-tuned both the multilingual mT5 model and the NLLB-200 model on this dataset. NLLB-200 outperformed mT5, achieving BLEU scores of 26.64 for English-Efik and 31.21 for Efik-English, with corresponding chrF scores of 51.04 and 47.92, indicating improved fluency and semantic fidelity. Our findings demonstrate the feasibility of developing practical machine translation tools for low-resource languages and highlight the importance of inclusive data practices and culturally grounded evaluation in advancing equitable NLP.