Unveiling Language Routing Isolation in Multilingual MoE Models for Interpretable Subnetwork Adaptation

arXiv cs.CL / 4/7/2026


Key Points

  • The paper investigates why multilingual Mixture-of-Experts (MoE) models show uneven performance across languages by analyzing expert routing behavior inside the model.
  • It identifies a new pattern called “Language Routing Isolation,” where high- and low-resource languages tend to activate largely disjoint sets of experts.
  • Layer-wise analysis reveals a convergence–divergence routing structure across depth, suggesting routing dynamics change systematically from shallow to deep layers.
  • The authors introduce RISE (Routing Isolation-guided Subnetwork Enhancement), which selects language-specific and universal expert subnetworks using specificity and overlap scores.
  • By training only the selected subnetworks and freezing the rest, RISE improves low-resource language F1 by up to 10.85% across 10 languages with minimal degradation on other languages.
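The tripartite selection described above can be sketched in a few lines. The score definitions below (specificity as the gap between a language's activation frequency and the best other language; overlap as a min/max uniformity ratio) and the equal three-way layer split are illustrative assumptions, not the paper's exact formulas:

```python
# Hypothetical sketch of RISE-style tripartite expert selection.
# `specificity` and `overlap` are assumed score definitions; the paper's
# exact formulations may differ.

def specificity(freqs, lang):
    """How much more often an expert fires for `lang` than for any other language."""
    others = [f for l, f in freqs.items() if l != lang]
    return freqs[lang] - max(others)

def overlap(freqs):
    """How uniformly an expert fires across languages (min/max ratio)."""
    vals = list(freqs.values())
    return min(vals) / max(vals) if max(vals) > 0 else 0.0

def select_subnetwork(routing_freqs, lang, n_layers, k=2):
    """routing_freqs[layer][expert] -> {language: activation frequency}.

    Language-specific experts are picked in shallow and deep layers by
    specificity; universal experts in middle layers by overlap.
    """
    middle = range(n_layers // 3, 2 * n_layers // 3)
    selected = {}
    for layer, experts in routing_freqs.items():
        if layer in middle:
            score = lambda e: overlap(experts[e])            # universal experts
        else:
            score = lambda e: specificity(experts[e], lang)  # language-specific
        selected[layer] = sorted(experts, key=score, reverse=True)[:k]
    return selected
```

In practice the activation frequencies would come from logging router decisions over a corpus per language; here they are taken as given.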

Abstract

Mixture-of-Experts (MoE) models exhibit striking performance disparities across languages, yet the internal mechanisms driving these gaps remain poorly understood. In this work, we conduct a systematic analysis of expert routing patterns in MoE models, revealing a phenomenon we term Language Routing Isolation, in which high- and low-resource languages tend to activate largely disjoint expert sets. Through layer-stratified analysis, we further show that routing patterns exhibit a layer-wise convergence-divergence pattern across model depth. Building on these findings, we propose RISE (Routing Isolation-guided Subnetwork Enhancement), a framework that exploits routing isolation to identify and adapt language-specific expert subnetworks. RISE applies a tripartite selection strategy, using specificity scores to identify language-specific experts in shallow and deep layers and overlap scores to select universal experts in middle layers. By training only the selected subnetwork while freezing all other parameters, RISE substantially improves low-resource language performance while preserving capabilities in other languages. Experiments on 10 languages demonstrate that RISE achieves target-language F1 gains of up to 10.85% with minimal cross-lingual degradation.
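The "train only the selected subnetwork, freeze everything else" step amounts to masking which parameters receive gradients. A minimal framework-agnostic sketch, assuming a hypothetical `layers.{L}.experts.{E}.*` parameter naming scheme (not taken from the paper):

```python
# Hypothetical sketch: derive the trainable parameter set from a
# per-layer expert selection. The naming scheme is an assumption.

def trainable_param_names(param_names, selected, prefix="experts"):
    """Return the parameter names kept trainable; all others are frozen.

    param_names: iterable of full model parameter names.
    selected: {layer_index: [expert_ids]} from subnetwork selection.
    """
    keep = {f"layers.{layer}.{prefix}.{e}"
            for layer, experts in selected.items()
            for e in experts}
    return [n for n in param_names if any(n.startswith(k) for k in keep)]
```

In a PyTorch model one would then set `p.requires_grad = name in trainable_set` over `model.named_parameters()` before fine-tuning, so the optimizer only updates the selected experts.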