Merge and Conquer: Instructing Multilingual Models by Adding Target Language Weights
arXiv cs.CL / March 31, 2026
Key Points
- The paper addresses how to improve instruction-following and language performance of LLMs in low-resource languages that are typically underrepresented in English-centric models.
- It proposes transferring language knowledge via model merging: an instruction-tuned LLM is combined in weight space with a base model adapted to the target language, avoiding the need for new language-specific instruction datasets and repeated fine-tuning (a minimal weight-space sketch follows this list).
- Experiments on Basque, Catalan, Galician, and Spanish across two model families show that merging can produce effective instruction-following in newly targeted languages.
- The authors also demonstrate that merging multiple language-specific models can yield multilingual capability, suggesting a scalable way to compose strengths across languages.
- Overall, the work concludes that model merging can be a computationally efficient alternative to continual pre-training for low-resource language adaptation while maintaining competitive results.
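The summary does not say which merging formula the authors use. Below is a minimal sketch of one common recipe from the model-merging literature, task-vector arithmetic: each fine-tuned checkpoint's delta from a shared base is extracted, and a weighted sum of deltas is added back onto the base. The file names, merge coefficients, and the choice of recipe here are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of weight-space merging via task-vector arithmetic.
# Assumptions (not from the paper): all checkpoints share one architecture
# and tokenizer, the state dicts contain only floating-point parameter
# tensors, and the file names below are hypothetical placeholders.
from typing import Dict, List

import torch

StateDict = Dict[str, torch.Tensor]

def task_vector(tuned: StateDict, base: StateDict) -> StateDict:
    """Delta between a fine-tuned checkpoint and the shared base it came from."""
    return {k: tuned[k] - base[k] for k in base}

def merge(base: StateDict, vectors: List[StateDict], weights: List[float]) -> StateDict:
    """Add a weighted sum of task vectors back onto the base weights."""
    merged = {k: v.clone() for k, v in base.items()}
    for vec, w in zip(vectors, weights):
        for k in merged:
            merged[k] += w * vec[k]
    return merged

# Combine an instruction-following delta with a language-adaptation delta.
base = torch.load("base.pt")          # shared English-centric base model
instruct = torch.load("instruct.pt")  # instruction-tuned on (mostly English) data
basque = torch.load("basque.pt")      # continually pre-trained on Basque text

merged = merge(
    base,
    vectors=[task_vector(instruct, base), task_vector(basque, base)],
    weights=[1.0, 1.0],  # merge coefficients are hyperparameters to tune
)

# Appending further language vectors (Catalan, Galician, Spanish, ...) to
# `vectors` composes them into a single multilingual instruction model,
# matching the multilingual composition described in the key points above.
```

In practice the merge coefficients usually need light tuning per language, and more elaborate merging schemes (e.g., TIES or spherical interpolation) exist to handle sign conflicts between deltas; the paper may rely on any of these variants.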