Computational Lesions in Multilingual Language Models Separate Shared and Language-specific Brain Alignment
arXiv cs.CL / 4/14/2026
Key Points
- The study addresses how multilingual language processing is represented in the brain by using six multilingual LLMs as controllable proxies for neural mechanisms.
- Researchers introduce targeted “computational lesions” by zeroing parameter subsets that are either shared across languages or specific to one language, then compare model behavior to human fMRI.
- Lesioning a compact shared parameter core causes whole-brain encoding correlation to drop sharply (by 60.32%), indicating a causal role for shared parameters in brain alignment.
- Language-specific lesions preserve cross-language separation in embedding space but reduce brain predictivity for the matched language, suggesting that language-specific specializations are embedded within the shared representation.
- The results support a “shared backbone with language-specialized components” framework and propose a causal approach for multilingual brain–model alignment research.
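The lesion-then-compare logic of the study can be illustrated with a toy linear encoding model. Everything below is a minimal sketch under illustrative assumptions (sizes, the identity of the "shared core" units, and lesioning activations as a stand-in for zeroing parameters are all hypothetical, not taken from the paper): simulated responses depend on a compact core of units, and zeroing that core collapses held-out encoding correlation while zeroing other units would not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: "model activations" X drive simulated
# fMRI-like responses y through a compact set of shared-core units.
n, d = 400, 50
core = np.arange(10)                       # hypothetical shared-core units
X = rng.standard_normal((n, d))
w = np.zeros(d)
w[core] = rng.standard_normal(core.size)   # only the core carries signal
y = X @ w + 0.1 * rng.standard_normal(n)

train, test = slice(0, 300), slice(300, None)

def encoding_correlation(X, y):
    """Fit a linear encoding model on the train split and return the
    Pearson correlation between predicted and actual held-out responses."""
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    y_hat = X[test] @ beta
    return float(np.corrcoef(y[test], y_hat)[0, 1])

def lesion(X, units):
    """'Computational lesion': zero the activations of the chosen units,
    standing in here for zeroing the corresponding parameter subset."""
    X_les = X.copy()
    X_les[:, units] = 0.0
    return X_les

r_full = encoding_correlation(X, y)
r_core_lesion = encoding_correlation(lesion(X, core), y)
print(f"intact: r={r_full:.3f}  shared-core lesion: r={r_core_lesion:.3f}")
```

In this toy version the intact model predicts held-out responses almost perfectly, while the core-lesioned model's correlation falls to near chance, mirroring the causal contrast the paper draws between shared and language-specific components.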