Adapting TrOCR for Printed Tigrinya Text Recognition: Word-Aware Loss Weighting for Cross-Script Transfer Learning

arXiv cs.CV, April 23, 2026


Key Points

  • The paper presents the first adaptation of TrOCR, a Transformer-based OCR model, to recognize printed Tigrinya using the Ge'ez (Ethiopic) script.
  • It extends the pre-trained model's byte-level BPE tokenizer to cover 230 Ge'ez characters, noting that the unmodified model yields no usable output on Ge'ez text.
  • To fix systematic word-boundary errors caused by Latin-centric tokenization conventions, the authors introduce a Word-Aware Loss Weighting method.
  • After adaptation, the TrOCR-Printed model reaches 0.22% Character Error Rate (CER) and 97.20% exact match accuracy on 5,000 synthetic test images from the GLOCR dataset.
  • An ablation study shows Word-Aware Loss Weighting is the key component, cutting CER by two orders of magnitude beyond vocabulary extension alone.
  • The full training pipeline runs in under three hours on a single 8 GB consumer GPU, and the code, model weights, and evaluation scripts are publicly released.
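The summary does not spell out the weighting formula, but the name suggests up-weighting the cross-entropy loss on tokens at word boundaries so the decoder learns to place spaces correctly despite Latin-centric BPE conventions. A minimal sketch under that assumption; the function name, the boundary flags, and the weight value are illustrative, not the paper's actual method:

```python
import math

def word_aware_ce_loss(token_log_probs, target_ids, is_boundary,
                       boundary_weight=2.0):
    """Weighted per-token cross-entropy: tokens flagged as word-boundary
    tokens (e.g. tokens adjacent to whitespace) count boundary_weight times
    as much as ordinary tokens. All arguments are illustrative assumptions:
      token_log_probs: per-position dicts mapping token id -> log probability
      target_ids:      gold token id at each position
      is_boundary:     True where the gold token sits at a word boundary
    """
    total, weight_sum = 0.0, 0.0
    for log_probs, tgt, boundary in zip(token_log_probs, target_ids, is_boundary):
        w = boundary_weight if boundary else 1.0
        total += -w * log_probs[tgt]   # weighted negative log-likelihood
        weight_sum += w
    return total / weight_sum          # normalize by total weight
```

In a real training loop the same effect is usually obtained by multiplying an unreduced per-token loss tensor by a weight mask before averaging.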

Abstract

Transformer-based OCR models have shown strong performance on Latin and CJK scripts, but their application to African syllabic writing systems remains limited. We present the first adaptation of TrOCR for printed Tigrinya using the Ge'ez script. Starting from a pre-trained model, we extend the byte-level BPE tokenizer to cover 230 Ge'ez characters and introduce Word-Aware Loss Weighting to resolve systematic word-boundary failures that arise when applying Latin-centric BPE conventions to a new script. The unmodified model produces no usable output on Ge'ez text. After adaptation, the TrOCR-Printed variant achieves 0.22% Character Error Rate and 97.20% exact match accuracy on a held-out test set of 5,000 synthetic images from the GLOCR dataset. An ablation study confirms that Word-Aware Loss Weighting is the critical component, reducing CER by two orders of magnitude compared to vocabulary extension alone. The full pipeline trains in under three hours on a single 8 GB consumer GPU. All code, model weights, and evaluation scripts are publicly released.
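For context on the headline 0.22% figure: Character Error Rate is the Levenshtein (edit) distance between the predicted and reference strings, divided by the reference length. A self-contained sketch of the standard computation (not the paper's evaluation script):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: minimum number of character insertions,
    deletions, and substitutions turning hypothesis into reference,
    divided by the reference length. Uses a one-row DP for the
    Levenshtein distance."""
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))                # distances for the empty prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i             # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,         # deletion
                        dp[j - 1] + 1,     # insertion
                        prev + (reference[i - 1] != hypothesis[j - 1]))
            prev = cur
    return dp[n] / max(m, 1)
```

The metric works unchanged on Ge'ez text, since Python strings compare code points directly, e.g. `cer("ሰላም", "ሰላም") == 0.0`.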