Brain Score Tracks Shared Properties of Languages: Evidence from Many Natural Languages and Structured Sequences

arXiv cs.CL / 4/20/2026


Key Points

  • The paper investigates whether neural language models process language in ways that resemble human language processing, using the Brain Score (BS) framework that maps language model activations to fMRI responses during reading.
  • Experiments show that models trained on different natural languages, drawn from diverse language families, achieve very similar BS performance.
  • The study finds that models trained on certain structured non-language inputs—such as genome sequences, Python code, and hierarchically nested parentheses (illustrated in the sketch after this list)—also score reasonably well on BS and sometimes approach natural-language results.
  • Overall, the results suggest BS can reveal whether models capture shared structural regularities, but high BS alone may not be sufficient to conclude that processing is human-like.
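For concreteness, "pure hierarchical structure" here means balanced nested-bracket sequences. The exact generative process isn't spelled out in this summary, so the snippet below is only a minimal sketch of how such training data could be sampled; the function name `nested_parens` and the parameters `p_open` and `max_depth` are illustrative assumptions, not the authors' setup.

```python
import random

def nested_parens(max_depth=8, p_open=0.5, max_len=512):
    """Sample one balanced nested-parentheses sequence.

    At each step, open a new bracket with probability p_open (while under
    max_depth); otherwise close the most recent one. Every sequence is
    well formed, so the only structure an LM can learn is hierarchical.
    """
    seq, depth = [], 0
    while len(seq) < max_len:
        if depth < max_depth and (depth == 0 or random.random() < p_open):
            seq.append("(")
            depth += 1
        else:
            seq.append(")")
            depth -= 1
            # Occasionally stop once the sequence is balanced again.
            if depth == 0 and random.random() < 0.1:
                break
    seq.extend(")" * depth)  # close any brackets left open at the length cap
    return "".join(seq)
```

Capping the depth and length keeps sequences short enough to train on while preserving the recursive nesting the model must track.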

Abstract

Recent breakthroughs in language models (LMs) built on neural networks have raised the question: how similar is these models' processing to human language processing? Results from a framework called Brain Score (BS) -- predicting fMRI activations during reading from LM activations -- have been used to argue for a high degree of similarity. To understand this similarity, we conduct experiments training LMs on various types of input data and evaluating them on BS. We find that models trained on natural languages from many different language families have very similar BS performance. LMs trained on other structured data -- the human genome, Python code, and pure hierarchical structure (nested parentheses) -- also perform reasonably well, and in some cases close to natural languages. These findings suggest that BS can highlight language models' ability to extract common structure across natural languages, but the metric may not be sensitive enough to let us infer human-like processing from a high BS alone.
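
In rough terms, a BS-style evaluation fits a cross-validated linear map from LM hidden states to fMRI voxel responses and reports how well held-out responses are predicted. The sketch below illustrates that idea with ridge regression; the function name `brain_score`, the regularization grid, and the per-voxel Pearson averaging are assumptions for illustration, not the exact pipeline used in the paper.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def brain_score(lm_activations, fmri_responses, n_splits=5):
    """Cross-validated linear predictivity of fMRI responses from LM activations.

    lm_activations: (n_stimuli, n_features) hidden states, one row per stimulus.
    fmri_responses: (n_stimuli, n_voxels) recorded responses to the same stimuli.
    Returns the mean Pearson correlation between predicted and held-out responses.
    """
    scores = []
    kfold = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kfold.split(lm_activations):
        # Fit a regularized linear map from model features to all voxels at once.
        reg = RidgeCV(alphas=np.logspace(-2, 4, 7))
        reg.fit(lm_activations[train_idx], fmri_responses[train_idx])
        preds = reg.predict(lm_activations[test_idx])
        # Correlate predicted and actual responses per voxel, then average.
        voxel_r = [pearsonr(preds[:, v], fmri_responses[test_idx, v])[0]
                   for v in range(fmri_responses.shape[1])]
        scores.append(np.mean(voxel_r))
    return float(np.mean(scores))
```

Because the score is just held-out linear predictivity, any input regime that induces features correlated with the fMRI signal can raise it, which is why the paper can compare training on natural languages against genomes, code, and parentheses on the same footing.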