Neither Here Nor There: Cross-Lingual Representation Dynamics of Code-Mixed Text in Multilingual Encoders

arXiv cs.CL / 3/23/2026


Key Points

  • The study investigates cross-lingual representations in multilingual encoders for Hindi-English code-mixed inputs, finding that code-mixed representations are only loosely connected to either constituent language and gravitate toward an English-dominant semantic subspace.
  • The authors construct a unified trilingual corpus of parallel English, Devanagari Hindi, and Romanized code-mixed sentences and analyze representation alignment using Centered Kernel Alignment (CKA), token-level saliency, and entropy-based uncertainty analyses (a minimal CKA sketch follows this list).
  • Continued pre-training on code-mixed data improves English-code-mixed alignment but reduces English-Hindi alignment, revealing a trade-off in multilingual pre-training objectives.
  • They introduce a trilingual post-training alignment objective that brings code-mixed representations closer to both constituent languages simultaneously, yielding downstream gains on sentiment analysis and hate speech detection.
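
The layer-wise alignment comparison mentioned above can be reproduced with a few lines of NumPy. The sketch below uses the standard linear-CKA formulation (Kornblith et al., 2019) on pooled sentence representations of parallel data; the function name, the mean-pooling choice, and the per-layer loop are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representation matrices.

    X, Y: (n_sentences, d) arrays of pooled encoder states for the same
    n parallel sentences; the hidden sizes may differ. Returns a value in
    [0, 1], where 1 means the representations match up to rotation/scaling.
    """
    # Center each feature dimension before comparing subspaces.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-based linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F).
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(hsic / norm)

# Hypothetical usage: compare English vs. code-mixed representations per layer.
# reps_en and reps_cm would be dicts {layer_index: (n_sentences, d) array}
# extracted from a multilingual encoder on the parallel corpus.
# cka_by_layer = {l: linear_cka(reps_en[l], reps_cm[l]) for l in reps_en}
```

In this setup, low English-code-mixed CKA values alongside high English-Hindi values would reproduce the paper's central observation that code-mixed inputs sit apart from both constituent languages.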

Abstract

Multilingual encoder-based language models are widely adopted for code-mixed analysis tasks, yet we know surprisingly little about how they represent code-mixed inputs internally - or whether those representations meaningfully connect to the constituent languages being mixed. Using Hindi-English as a case study, we construct a unified trilingual corpus of parallel English, Hindi (Devanagari), and Romanized code-mixed sentences, and probe cross-lingual representation alignment across standard multilingual encoders and their code-mixed adapted variants via CKA, token-level saliency, and entropy-based uncertainty analysis. We find that while standard models align English and Hindi well, code-mixed inputs remain loosely connected to either language - and that continued pre-training on code-mixed data improves English-code-mixed alignment at the cost of English-Hindi alignment. Interpretability analyses further reveal a clear asymmetry: models process code-mixed text through an English-dominant semantic subspace, while native-script Hindi provides complementary signals that reduce representational uncertainty. Motivated by these findings, we introduce a trilingual post-training alignment objective that brings code-mixed representations closer to both constituent languages simultaneously, yielding more balanced cross-lingual alignment and downstream gains on sentiment analysis and hate speech detection - showing that grounding code-mixed representations in their constituent languages meaningfully helps cross-lingual understanding.
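
The abstract does not spell out the exact form of the trilingual post-training alignment objective, so the following is only one plausible instantiation: a symmetric, InfoNCE-style contrastive loss over parallel (code-mixed, English, Hindi) triples that pulls the code-mixed embedding toward both constituent languages at once. The function name, temperature, and equal weighting of the two terms are assumptions for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def trilingual_alignment_loss(cm_emb: torch.Tensor,
                              en_emb: torch.Tensor,
                              hi_emb: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """Illustrative trilingual alignment loss over parallel sentence triples.

    All inputs: (batch, hidden) pooled encoder outputs for the same batch of
    parallel sentences in code-mixed, English, and Devanagari Hindi form.
    """
    cm = F.normalize(cm_emb, dim=-1)
    en = F.normalize(en_emb, dim=-1)
    hi = F.normalize(hi_emb, dim=-1)
    targets = torch.arange(cm.size(0), device=cm.device)

    def nce(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # In-batch contrastive term: the parallel sentence is the positive,
        # all other sentences in the batch serve as negatives.
        logits = a @ b.T / temperature
        return F.cross_entropy(logits, targets)

    # Align code-mixed representations with English and Hindi simultaneously,
    # symmetrizing each direction so neither language dominates the pull.
    loss_en = 0.5 * (nce(cm, en) + nce(en, cm))
    loss_hi = 0.5 * (nce(cm, hi) + nce(hi, cm))
    return 0.5 * (loss_en + loss_hi)
```

Weighting the English and Hindi terms equally is what makes such an objective "trilingual" rather than English-anchored, which matches the paper's motivation of avoiding the English-dominant subspace that standard and continued pre-training both drift toward.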