Why Better Cross-Lingual Alignment Fails for Better Cross-Lingual Transfer: Case of Encoders
arXiv cs.CL / March 20, 2026
Key Points
- The paper shows that better cross-lingual alignment often fails to improve token-level downstream transfer despite increasing embedding similarity.
- It analyzes four XLM-R encoder models, each aligned on a different language pair and fine-tuned for POS tagging or sentence classification, using representational analyses such as embedding distances, gradient similarities, and gradient magnitudes.
- The results reveal that embedding distances are unreliable predictors of task performance and that alignment and task gradients are largely orthogonal: optimizing one objective contributes little to the other (see the sketch after this list).
- Based on these insights, the authors provide practical guidelines for combining cross-lingual alignment with task-specific fine-tuning and emphasize careful loss selection.
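
The gradient-orthogonality finding can be made concrete with a few lines of code. Below is a minimal sketch, not the paper's implementation: it assumes a PyTorch encoder and two already-computed scalar losses (an alignment loss and a task loss, both hypothetical placeholders here), and measures the cosine similarity between their parameter gradients alongside the embedding-distance metric mentioned above. All function names are illustrative.

```python
# Minimal sketch (assumed PyTorch setup; not the paper's code) of two of the
# representational measurements described above: embedding distance between
# translation pairs, and cosine similarity between the gradients of an
# alignment loss and a task loss.
import torch
import torch.nn.functional as F


def mean_embedding_distance(src_emb: torch.Tensor, tgt_emb: torch.Tensor) -> float:
    """Mean cosine distance between paired source/target embeddings.

    src_emb, tgt_emb: (batch, dim) tensors of embeddings for aligned
    translation pairs. Lower values indicate a better-aligned space.
    """
    return (1.0 - F.cosine_similarity(src_emb, tgt_emb, dim=-1)).mean().item()


def _flat_grad(loss: torch.Tensor, params: list) -> torch.Tensor:
    """Flatten d(loss)/d(params) into a single vector."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None])


def gradient_cosine(model: torch.nn.Module,
                    alignment_loss: torch.Tensor,
                    task_loss: torch.Tensor) -> float:
    """Cosine similarity between the alignment and task gradient vectors.

    A value near 0 means the two objectives are nearly orthogonal in
    parameter space: a step that lowers one loss barely moves the other,
    which is the failure mode the paper reports.
    """
    params = [p for p in model.parameters() if p.requires_grad]
    g_align = _flat_grad(alignment_loss, params)
    g_task = _flat_grad(task_loss, params)
    return F.cosine_similarity(g_align, g_task, dim=0).item()
```

In practice, both losses would come from the same forward pass over a parallel batch so that the two gradient calls share one computation graph (hence `retain_graph=True`). The third analysis the paper mentions, gradient magnitudes, falls out of applying `_flat_grad` per layer and taking norms.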
Related Articles

Attacks On Data Centers, Qwen3.5 In All Sizes, DeepSeek’s Huawei Play, Apple’s Multimodal Tokenizer
The Batch

Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".
Dev.to

Lessons from Academic Plagiarism Tools for SaaS Product Development
Dev.to

Core Allocation Optimization for Energy-Efficient Multi-Core Scheduling in ARINC650 Systems
Dev.to

AI in official patent-office searches at the DPMA: what patent attorneys should now keep in mind for new filings (as of March 2026)
Dev.to