COMPASS: COntinual Multilingual PEFT with Adaptive Semantic Sampling

arXiv cs.LG / April 23, 2026


Key Points

  • The COMPASS framework targets uneven LLM performance across languages by reducing negative cross-lingual interference during multilingual adaptation.
  • It uses parameter-efficient fine-tuning (PEFT) with language-specific lightweight adapters, trained on a carefully selected subset of auxiliary multilingual data.
  • COMPASS applies distribution-aware sampling: it clusters multilingual embeddings to identify semantic gaps between current training coverage and the target usage distribution, then prioritizes auxiliary data from under-represented semantic clusters.
  • It extends to continual learning via COMPASS-ECDA, which monitors production data distribution shifts and dynamically updates adapters to prevent model staleness while preserving prior knowledge.
  • Experiments across multiple model architectures and multilingual benchmarks show that COMPASS consistently outperforms baselines guided by linguistic similarity, including on unseen long-context tasks.

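The gap-based sampling idea in the points above can be sketched concretely. The snippet below is a minimal illustration, not the paper's implementation: it assumes cluster assignments (e.g. from k-means over multilingual embeddings) are already available for three pools — target usage data, current training data, and the auxiliary candidate pool — and the function name `gap_weighted_sample` is invented for this sketch.

```python
import numpy as np

def gap_weighted_sample(target_labels, train_labels, aux_labels,
                        n_clusters, n_pick, seed=0):
    """Pick auxiliary examples that fill semantic gaps: clusters frequent
    in the target usage distribution but rare in current training data
    receive higher sampling weight (an illustrative sketch)."""
    rng = np.random.default_rng(seed)

    def freqs(labels):
        # Empirical cluster frequencies with add-one smoothing to avoid zeros.
        counts = np.bincount(labels, minlength=n_clusters) + 1
        return counts / counts.sum()

    # Gap = how under-represented each cluster is relative to target usage.
    gap = np.clip(freqs(target_labels) - freqs(train_labels), 0.0, None) + 1e-9

    # Each auxiliary example inherits the gap weight of its cluster.
    weights = gap[aux_labels]
    weights = weights / weights.sum()
    return rng.choice(len(aux_labels), size=n_pick, replace=False, p=weights)
```

With this weighting, auxiliary examples from clusters the training data already covers contribute almost nothing, so the selected subset concentrates on the semantic regions where transfer is most likely to help rather than interfere.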
Abstract

Large language models (LLMs) often exhibit performance disparities across languages, with naive multilingual fine-tuning frequently degrading performance due to negative cross-lingual interference. To address this, we introduce COMPASS (COntinual Multilingual PEFT with Adaptive Semantic Sampling), a novel data-centric framework for adapting LLMs to target languages. COMPASS leverages parameter-efficient fine-tuning (PEFT) by training lightweight, language-specific adapters on a judiciously selected subset of auxiliary multilingual data. The core of our method is a distribution-aware sampling strategy that uses multilingual embeddings and clustering to identify semantic gaps between existing training data and a target usage distribution. By prioritizing auxiliary data from under-represented semantic clusters, COMPASS maximizes positive cross-lingual transfer while minimizing interference. We extend this into a continual learning framework, COMPASS-ECDA, which monitors for data distribution shifts in production and dynamically updates adapters to prevent model staleness, balancing adaptation to new data with the preservation of existing knowledge. Across three different model architectures (Phi-4-Mini, Llama-3.1-8B, and Qwen2.5-7B) and multiple challenging multilingual benchmarks (Global-MMLU, MMLU-ProX), including unseen long-context tasks (OneRuler), we demonstrate that COMPASS consistently outperforms baseline methods guided by linguistic similarity, providing an effective, efficient, and sustainable solution for developing and maintaining high-performing multilingual models in dynamic environments.
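The abstract describes COMPASS-ECDA monitoring production traffic for distribution shift and refreshing adapters when the data drifts. One simple way to operationalize such a check, sketched below under assumptions not stated in the paper, is to compare the cluster histogram of recent production queries against a reference histogram using Jensen–Shannon divergence and trigger an adapter update past a threshold; the names `needs_refresh` and `threshold` are hypothetical.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two histograms."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0  # 0 * log(0) terms contribute nothing
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def needs_refresh(ref_counts, prod_counts, threshold=0.1):
    """Flag an adapter for retraining when the production traffic's
    semantic-cluster histogram drifts beyond a divergence threshold."""
    return js_divergence(np.asarray(ref_counts, dtype=float),
                         np.asarray(prod_counts, dtype=float)) > threshold
```

A divergence-based trigger like this keeps updates infrequent when traffic is stable, which matches the paper's stated goal of balancing adaptation to new data against preservation of existing knowledge.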