From FusHa to Folk: Exploring Cross-Lingual Transfer in Arabic Language Models

arXiv cs.CL / 4/1/2026

Key Points

  • The study examines how Arabic language models pretrained mainly on Modern Standard Arabic (MSA) transfer to different Arabic dialects used in speech and online writing.
  • Using probing across three NLP tasks and representational similarity analysis, the authors find that cross-dialect transfer is possible but varies significantly between dialects (a sketch of the probing recipe follows this list).
  • The paper reports that this uneven transfer across dialects is partially explained by their geographic proximity.
  • It also provides evidence of negative interference when models are trained to support all Arabic dialects simultaneously, suggesting that broadening dialect coverage during training can reduce effective transfer for some dialects.
  • The findings raise concerns about how well “all-dialect” training strategies support cross-lingual transfer in Arabic language models.

Abstract

Arabic Language Models (LMs) are pretrained predominantly on Modern Standard Arabic (MSA) and are expected to transfer to Arabic's dialects. While MSA, the standard written variety, is commonly used in formal settings, people speak and write online in a range of dialects spread across the Arab region. This poses limitations for Arabic LMs, since these dialects vary in how similar they are to MSA. In this work we study cross-lingual transfer in Arabic models using probing on three Natural Language Processing (NLP) tasks and representational similarity analysis. Our results indicate that transfer is possible but disproportionate across dialects, which we find to be partially explained by their geographic proximity. Furthermore, we find evidence of negative interference in models trained to support all Arabic dialects. This calls the dialects' assumed similarity into question and raises concerns for cross-lingual transfer in Arabic models.