COMPASS: COntinual Multilingual PEFT with Adaptive Semantic Sampling
arXiv cs.LG / 4/23/2026
Key Points
- The COMPASS framework targets uneven LLM performance across languages by reducing negative cross-lingual interference during multilingual adaptation.
- It uses parameter-efficient fine-tuning (PEFT) with language-specific lightweight adapters, trained on a carefully selected subset of auxiliary multilingual data.
- COMPASS applies distribution-aware sampling: it clusters multilingual embeddings to find semantic gaps between current training coverage and the target usage distribution, then prioritizes data from under-represented semantic clusters.
- It extends to continual learning via COMPASS-ECDA, which monitors production data distribution shifts and dynamically updates adapters to prevent model staleness while preserving prior knowledge.
- Experiments across multiple architectures and multilingual benchmarks show COMPASS outperforming baselines that rely on linguistic similarity, including on unseen long-context tasks.
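The distribution-aware sampling step can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the paper's implementation: it assumes examples have already been embedded and assigned to semantic clusters (e.g., by k-means over multilingual embeddings), and the helper names `gap_weights` and `sample_auxiliary` are hypothetical. It upweights clusters that are frequent in the target usage distribution but rare in current training coverage.

```python
import random
from collections import Counter

def gap_weights(train_labels, target_labels, n_clusters, eps=1e-6):
    """Per-cluster sampling weights, proportional to how under-represented
    each semantic cluster is in training data relative to target usage."""
    def dist(labels):
        counts = Counter(labels)
        total = len(labels) + eps * n_clusters
        return [(counts.get(i, 0) + eps) / total for i in range(n_clusters)]
    p_train, p_target = dist(train_labels), dist(target_labels)
    # Ratio > 1 where training under-covers the target distribution.
    ratio = [t / s for t, s in zip(p_target, p_train)]
    z = sum(ratio)
    return [r / z for r in ratio]

def sample_auxiliary(candidate_labels, weights, k, seed=0):
    """Draw k auxiliary examples (with replacement, for simplicity),
    favoring examples from under-represented clusters."""
    rng = random.Random(seed)
    per_example = [weights[c] for c in candidate_labels]
    indices = list(range(len(candidate_labels)))
    return rng.choices(indices, weights=per_example, k=k)
```

For instance, if training data is 75% cluster 0 while target usage is 75% cluster 1, `gap_weights` assigns cluster 1 a much larger sampling weight, so auxiliary selection fills that semantic gap first.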
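The continual-learning extension can likewise be illustrated with a minimal drift monitor. This is a sketch of the general idea, assuming (as above) that production examples are mapped to semantic clusters; the class name `DriftMonitor`, the sliding-window design, and the KL-divergence trigger are illustrative choices, not COMPASS-ECDA's actual mechanism. It flags when the cluster mix of recent traffic drifts far enough from the training-time mix that an adapter refresh is warranted.

```python
import math
from collections import Counter, deque

def cluster_dist(labels, n_clusters, eps=1e-6):
    """Smoothed relative frequency of each semantic cluster."""
    counts = Counter(labels)
    total = len(labels) + eps * n_clusters
    return [(counts.get(i, 0) + eps) / total for i in range(n_clusters)]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

class DriftMonitor:
    """Flags when the semantic-cluster mix of recent production traffic
    drifts past a KL threshold from the reference (training-time) mix."""
    def __init__(self, reference_labels, n_clusters, window=500, threshold=0.1):
        self.n_clusters = n_clusters
        self.reference = cluster_dist(reference_labels, n_clusters)
        self.recent = deque(maxlen=window)  # sliding window of cluster labels
        self.threshold = threshold

    def observe(self, cluster_label):
        """Record one production example; return True when an adapter
        update should be triggered."""
        self.recent.append(cluster_label)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        current = cluster_dist(self.recent, self.n_clusters)
        return kl_divergence(current, self.reference) > self.threshold
```

On a trigger, a system in the spirit of COMPASS-ECDA would re-run gap-aware sampling against the new traffic distribution and update only the lightweight adapters, leaving base-model weights (and thus prior knowledge) untouched.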