AI Navigate

AraModernBERT: Transtokenized Initialization and Long-Context Encoder Modeling for Arabic

arXiv cs.AI / 3/12/2026


Key Points

  • AraModernBERT is presented as an Arabic adaptation of the ModernBERT encoder architecture.
  • Transtokenized embedding initialization proves essential for Arabic, yielding large gains in masked language modeling over non-transtokenized initialization.
  • Native long-context modeling up to 8,192 tokens is stable and improves intrinsic language modeling performance at extended sequence lengths.
  • Downstream evaluations on Arabic NLP tasks, including natural language inference, offensive language detection, question-question similarity, and named entity recognition, confirm strong transfer to discriminative and sequence labeling settings.
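
The transtokenized initialization highlighted above can be pictured as mapping each token in the new Arabic vocabulary to one or more aligned tokens in the source model's vocabulary, then averaging their embeddings. The sketch below is a minimal illustration of that idea; the function name, the alignment mapping, and the random fallback for unmapped tokens are assumptions, not the paper's exact procedure:

```python
import random


def transtokenize_init(src_emb, token_map, tgt_vocab_size, dim, seed=0):
    """Initialize a target-vocabulary embedding table from a source model.

    Each target token id maps to zero or more source token ids (obtained,
    e.g., via a translation-based token alignment); its embedding starts as
    the mean of the mapped source embeddings. Tokens with no alignment fall
    back to small random vectors (an assumed fallback, for illustration).
    """
    rng = random.Random(seed)
    tgt_emb = [[rng.gauss(0.0, 0.02) for _ in range(dim)]
               for _ in range(tgt_vocab_size)]
    for tgt_id, src_ids in token_map.items():
        if src_ids:  # average the aligned source embeddings
            tgt_emb[tgt_id] = [
                sum(src_emb[s][d] for s in src_ids) / len(src_ids)
                for d in range(dim)
            ]
    return tgt_emb


# Toy example: 3 source tokens with 2-dimensional embeddings.
src = [[0.0, 1.0], [2.0, 3.0], [4.0, 5.0]]
mapping = {0: [0], 1: [1, 2], 2: []}  # target token 2 has no alignment
tgt = transtokenize_init(src, mapping, tgt_vocab_size=3, dim=2)
# tgt[1] is the mean of source rows 1 and 2 -> [3.0, 4.0]
```

The intuition is that aligned source embeddings give the new vocabulary a semantically meaningful starting point, so masked language modeling does not have to relearn token semantics from scratch.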

Abstract

Encoder-only transformer models remain widely used for discriminative NLP tasks, yet recent architectural advances have largely focused on English. In this work, we present AraModernBERT, an adaptation of the ModernBERT encoder architecture to Arabic, and study the impact of transtokenized embedding initialization and native long-context modeling up to 8,192 tokens. We show that transtokenization is essential for Arabic language modeling, yielding dramatic improvements in masked language modeling performance compared to non-transtokenized initialization. We further demonstrate that AraModernBERT supports stable and effective long-context modeling, achieving improved intrinsic language modeling performance at extended sequence lengths. Downstream evaluations on Arabic natural language understanding tasks, including inference, offensive language detection, question-question similarity, and named entity recognition, confirm strong transfer to discriminative and sequence labeling settings. Our results highlight practical considerations for adapting modern encoder architectures to Arabic and other languages written in Arabic-derived scripts.
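
One common way to measure "intrinsic language modeling performance" for an encoder-only model is pseudo-perplexity: each position is masked in turn and the model's probability of the true token is scored. The abstract does not specify the paper's exact metric, so the sketch below is an assumed illustration; the `masked_logprob` callable stands in for a real encoder forward pass:

```python
import math


def pseudo_perplexity(token_ids, masked_logprob):
    """Pseudo-perplexity of a sequence under a masked language model.

    `masked_logprob(token_ids, i)` should return the model's log-probability
    of the true token at position i when that position is masked. Here it is
    a placeholder for an actual model call (an assumption for illustration).
    """
    total = sum(masked_logprob(token_ids, i) for i in range(len(token_ids)))
    return math.exp(-total / len(token_ids))


def toy_model(token_ids, i):
    # A "model" that assigns probability 0.5 to every masked token.
    return math.log(0.5)


ppl = pseudo_perplexity([1, 2, 3, 4], toy_model)
# With p = 0.5 everywhere, pseudo-perplexity is exactly 2.0.
```

Evaluating a metric like this at increasing sequence lengths (e.g. 512 vs. 8,192 tokens) is one way to check whether long-context modeling stays stable rather than degrading as positions extend beyond the lengths seen most often in pretraining.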