Positional Cognitive Specialization: Where Do LLMs Learn To Comprehend and Speak Your Language?
arXiv cs.CL / 4/3/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies how decoder-only LLMs acquire comprehension and generation abilities in low-resource languages, focusing on internal “cognitive specialization” during training rather than post-hoc interpretability.
- Using layer-ablation sweeps from the input (perception) and output (production) sides, the authors find that different model regions develop distinct roles for understanding versus speaking a target language; a minimal ablation sketch appears after this list.
- Based on these specialization patterns, the authors propose CogSym, a layer-wise fine-tuning heuristic that selectively updates only a small set of early and late layers.
- The method shows that tuning only the outer 25% of layers can reach downstream task performance within 2–3% of full fine-tuning, while performing on par with adapter-style approaches such as LoRA; see the freezing sketch after this list.
- Overall, the work provides actionable insight for cheaper, less opaque multilingual adaptation and aims to make language modeling more accessible for diverse languages.
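The ablation sweeps described in the second bullet can be approximated as follows. This is a minimal sketch under stated assumptions, not the paper's exact protocol: it assumes a Hugging Face LLaMA-style decoder whose blocks are exposed as `model.model.layers`, bypasses the first k blocks with identity forward hooks, and leaves the evaluation metric (`eval_fn`, e.g., target-language perplexity) as a placeholder.

```python
import torch

@torch.no_grad()
def input_side_ablation_sweep(model, eval_fn, max_k):
    """Score the model with its first k decoder blocks bypassed, for k = 0..max_k.

    A bypassed block is replaced by the identity map: its output hidden
    states are overwritten with its input hidden states via a forward hook.
    """
    layers = model.model.layers  # LLaMA-style attribute path; an assumption
    scores = []
    for k in range(max_k + 1):
        handles = [
            block.register_forward_hook(
                # Decoder blocks return a tuple; keep everything but swap
                # the hidden states (element 0) for the block's input.
                lambda module, args, output: (args[0],) + tuple(output[1:])
            )
            for block in layers[:k]
        ]
        scores.append(eval_fn(model))  # e.g., perplexity on held-out target-language text
        for h in handles:
            h.remove()
    return scores
```

An output-side (production) sweep would instead bypass `layers[-k:]`, which is the mirror of the loop above.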
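The outer-layer tuning heuristic from the third and fourth bullets can be sketched in the same spirit: freeze every parameter, then re-enable gradients only for the earliest and latest decoder blocks so that roughly 25% of blocks remain trainable. The checkpoint name, the `model.model.layers` attribute path, and the even early/late split are assumptions for illustration; the paper's actual CogSym layer selection may differ.

```python
from transformers import AutoModelForCausalLM

def freeze_inner_layers(model, outer_ratio=0.25):
    """Keep only the outer `outer_ratio` fraction of decoder blocks trainable,
    split evenly between the start and the end of the stack. Embeddings and
    the LM head stay frozen in this sketch."""
    layers = model.model.layers                     # LLaMA-style naming; an assumption
    k = max(1, int(len(layers) * outer_ratio / 2))  # blocks to keep at each end

    for p in model.parameters():                    # freeze everything first
        p.requires_grad = False
    for block in list(layers[:k]) + list(layers[-k:]):
        for p in block.parameters():                # unfreeze only the outer blocks
            p.requires_grad = True
    return model

# Illustrative checkpoint; any decoder-only causal LM with this layout works.
model = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
model = freeze_inner_layers(model, outer_ratio=0.25)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable: {trainable:,} / {total:,} parameters")
```

After this, any standard training loop (or `transformers.Trainer`) only updates the unfrozen outer blocks, which is what makes the recipe cheaper than full fine-tuning.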