Brain Score Tracks Shared Properties of Languages: Evidence from Many Natural Languages and Structured Sequences
arXiv cs.CL / 4/20/2026
Key Points
- The paper investigates whether neural language models process language in ways that resemble human language processing, using the Brain Score (BS) framework that maps language model activations to fMRI responses during reading.
- Experiments show that language models trained on many natural languages, drawn from diverse language families, achieve closely comparable BS performance.
- The study finds that models trained on certain structured non-language inputs—such as genome sequences, Python code, and hierarchically nested parentheses—also score reasonably on BS and sometimes approach natural-language results.
- Overall, the results suggest BS can reveal whether models capture shared structural regularities, but high BS alone may not be sufficient to conclude that processing is human-like.
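To make the BS framework concrete, here is a minimal sketch of the standard encoding-model recipe it builds on: fit a cross-validated ridge regression from model activations to fMRI voxel responses, then score held-out predictions by Pearson correlation. This is an illustrative simplification with synthetic data; the paper's actual pipeline, brain parcellation, regularization choices, and the `brain_score` function name are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

def brain_score(activations: np.ndarray, fmri: np.ndarray, n_splits: int = 5) -> float:
    """Mean held-out Pearson r between ridge-predicted and measured voxel responses."""
    fold_scores = []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in kf.split(activations):
        model = Ridge(alpha=1.0).fit(activations[train], fmri[train])
        pred, true = model.predict(activations[test]), fmri[test]
        # Pearson r per voxel on the held-out fold, then averaged across voxels
        pc, tc = pred - pred.mean(0), true - true.mean(0)
        r = (pc * tc).sum(0) / (np.linalg.norm(pc, axis=0) * np.linalg.norm(tc, axis=0) + 1e-12)
        fold_scores.append(r.mean())
    return float(np.mean(fold_scores))

# Synthetic demo: voxel responses are a noisy linear readout of the activations,
# so the score should be high; unrelated responses would score near zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))                 # model activations per stimulus
W = rng.normal(size=(32, 10))                  # hidden linear readout
Y = X @ W + 0.5 * rng.normal(size=(200, 10))   # simulated fMRI voxel responses
print(round(brain_score(X, Y), 3))
```

Note the design point this makes vivid: any model whose activations are linearly predictive of voxel responses scores well, which is why structured non-language inputs can also yield nontrivial BS, as the paper argues.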