Positional Cognitive Specialization: Where Do LLMs Learn To Comprehend and Speak Your Language?

arXiv cs.CL / 4/3/2026


Key Points

  • The paper studies how decoder-only LLMs acquire comprehension and generation abilities in low-resource languages, focusing on internal “cognitive specialization” during training rather than post-hoc interpretability.
  • Using layer ablation sweeps from input (perception) and output (production) sides, the authors find that different model regions develop distinct roles for understanding versus speaking a target language.
  • Based on these specialization patterns, the authors propose CogSym, a layer-wise fine-tuning heuristic that selectively updates only a small set of early and late layers.
  • Tuning only the outermost 25% of layers reaches downstream task performance within 2–3% of full fine-tuning, and the heuristic also holds when combined with adapter methods such as LoRA.
  • Overall, the work provides actionable insight for cheaper, less opaque multilingual adaptation and aims to make language modeling more accessible for diverse languages.

Abstract

Adapting large language models (LLMs) to new languages is an expensive and opaque process. Understanding how language models acquire new languages and multilingual abilities is key to achieving efficient adaptation. Prior multilingual interpretability research focuses primarily on how trained models process multilingual instructions, leaving unexplored the mechanisms through which they acquire new languages during training. We investigate these training dynamics on decoder-only transformers through the lens of two functional cognitive specializations: language perception (input comprehension) and production (output generation). Through experiments on low-resource languages, we demonstrate how perceptual and productive specialization emerges in different regions of a language model by running layer ablation sweeps from the model's input and output directions. Based on the observed specialization patterns, we propose CogSym, a layer-wise heuristic that enables effective adaptation by exclusively fine-tuning a few early and late layers. We show that tuning only the outermost 25% of layers achieves downstream task performance within 2-3% deviation from the full fine-tuning baseline. CogSym yields performance consistent with adapter methods such as LoRA, showcasing generalization beyond full fine-tuning. These findings provide insights into how LLMs learn new languages and push toward accessible and inclusive language modeling.
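The layer ablation sweeps mentioned in the abstract can be illustrated with a toy model: cumulatively replace layers with the identity, starting from either the input side or the output side, and record how a task score degrades. This is a minimal sketch of the general technique, not the paper's implementation; `ablation_sweep`, `score_fn`, and the toy layers are all assumed names.

```python
def ablation_sweep(layers, x, score_fn, direction="input"):
    """Score the model as layers are cumulatively ablated (identity-replaced).

    direction="input":  ablate the first k layers (perception side).
    direction="output": ablate the last k layers (production side).
    Returns one score per ablation depth k = 0 .. len(layers).
    """
    n = len(layers)
    scores = []
    for k in range(n + 1):
        if direction == "input":
            active = layers[k:]       # first k layers skipped
        else:
            active = layers[: n - k]  # last k layers skipped
        h = x
        for layer in active:          # run the remaining stack
            h = layer(h)
        scores.append(score_fn(h))
    return scores

# Toy example: four "layers" that each add 1; the score is the value itself,
# so each ablated layer costs exactly one point in either direction.
toy_layers = [lambda v: v + 1 for _ in range(4)]
print(ablation_sweep(toy_layers, 0, lambda h: h, direction="input"))   # [4, 3, 2, 1, 0]
print(ablation_sweep(toy_layers, 0, lambda h: h, direction="output"))  # [4, 3, 2, 1, 0]
```

In the paper's setting, an asymmetry between the two sweep directions is what reveals where comprehension versus generation ability lives in the layer stack.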