One Model to Translate Them All? A Journey to Mount Doom for Multilingual Model Merging

arXiv cs.CL / 4/6/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper studies weight-space model merging for multilingual machine translation, aiming to understand why merging strategies that work in multitask settings can fail across languages.
  • Through full fine-tuning on large bilingual corpora and evaluation of standard merging methods, the authors find that merging typically degrades performance, with the drop being especially severe when target languages differ.
  • The analysis uses span-conditioned neuron-selectivity measures and layer-wise centered kernel alignment (CKA) to show that language-specific neurons are concentrated in embedding layers and upper transformer blocks, while intermediate layers stay comparatively shared.
  • Fine-tuning is shown to redistribute rather than sharpen language selectivity: neurons selective for supervised and related languages become less exclusive, while neurons for unsupervised languages become more isolated.
  • The resulting representational divergence in higher layers undermines the geometric assumptions that make weight-space merging effective, providing a mechanistic explanation for multilingual merging failure.
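
The weight-space merging the paper studies reduces, in its simplest form, to (weighted) parameter averaging over independently fine-tuned checkpoints. A minimal sketch of uniform averaging (the paper evaluates several standard strategies; the function name and dict-of-arrays checkpoint format here are illustrative, not from the paper):

```python
import numpy as np

def average_merge(state_dicts, weights=None):
    """Merge checkpoints that share an architecture by (weighted)
    parameter averaging -- the simplest weight-space merging strategy.
    Each checkpoint is a dict mapping parameter names to arrays."""
    if weights is None:
        # default: uniform average over all checkpoints
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged
```

More elaborate strategies add sign- or magnitude-based heuristics on top of this idea, but all of them assume the checkpoints sit in a geometrically compatible region of weight space, which is exactly the assumption the paper argues multilingual fine-tuning breaks.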

Abstract

Weight-space model merging combines independently fine-tuned models without accessing original training data, offering a practical alternative to joint training. While merging succeeds in multitask settings, its behavior in multilingual contexts remains poorly understood. We systematically study weight-space merging for multilingual machine translation by fully fine-tuning language models on large-scale bilingual corpora and evaluating standard merging strategies. Our experiments reveal that merging degrades performance, especially when target languages differ. To explain this failure, we analyze internal representations using span-conditioned neuron selectivity and layer-wise centered kernel alignment. We find that language-specific neurons concentrate in embedding layers and upper transformer blocks, while intermediate layers remain largely shared across languages. Critically, fine-tuning redistributes rather than sharpens language selectivity: neurons for supervised and related languages become less exclusive, while those for unsupervised languages grow more isolated. This redistribution increases representational divergence in higher layers that govern generation. These findings suggest that multilingual fine-tuning may reshape geometry in ways that reduce compatibility with standard weight-space merging assumptions. Our work thus provides an explanation for why merging fails in multilingual translation scenarios.
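
Layer-wise centered kernel alignment compares how two models represent the same inputs, one layer at a time. A minimal sketch of the linear variant, assuming activation matrices with one row per input (the abstract does not specify which CKA variant is used; linear CKA is the common choice in representation-similarity work):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X: (n, d1) and Y: (n, d2),
    where rows correspond to the same n inputs. Returns a value in [0, 1];
    1 means the representations match up to rotation and isotropic scaling."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2  # cross-representation similarity
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))
```

Applied per layer to activations from two bilingual fine-tunes, low CKA in the upper blocks would directly exhibit the representational divergence in the layers that govern generation, while high CKA in intermediate blocks would match the "largely shared" finding.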