Convergent Representations of Linguistic Constructions in Human and Artificial Neural Systems

arXiv cs.CL / 4/1/2026


Key Points

  • The paper compares how human brains and artificial neural language models represent argument structure constructions (ASCs) during sentence processing.
  • EEG recordings from 10 participants listening to synthetically generated sentences show construction-specific neural signatures that emerge mainly at sentence-final positions when argument structure is fully disambiguated.
  • Using time-frequency analysis and machine learning classification, the study finds the strongest and most reliable effects in the alpha band and particularly clear differentiation between ditransitive and resultative constructions.
  • The timing and similarity structure of human neural effects align with patterns seen in recurrent and transformer-based models, suggesting convergence on similar representational solutions during integrative processing.
  • The results are interpreted as supporting the Construction Grammar view that constructions are distinct form-meaning mappings, and the broader idea of a “Platonic representational space” in which learning systems settle into stable regions that enable efficient abstractions.
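The paper does not publish its analysis code, but the pipeline the bullets describe (band-limited power features followed by pairwise classification) can be illustrated with a minimal numpy sketch on synthetic data. Everything below is an assumption for illustration: the function names, the sampling rate, the nearest-centroid classifier, and the toy signals are mine, not the study's (the authors' actual feature extraction and classifier may differ).

```python
import numpy as np

def alpha_band_power(epochs, sfreq, band=(8.0, 12.0)):
    """Mean spectral power per epoch and channel in a frequency band.

    epochs: (n_epochs, n_channels, n_times) array; sfreq: sampling rate in Hz.
    Returns an (n_epochs, n_channels) feature matrix.
    """
    n_times = epochs.shape[-1]
    freqs = np.fft.rfftfreq(n_times, d=1.0 / sfreq)
    power = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[..., in_band].mean(axis=-1)

def pairwise_accuracy(X_a, X_b):
    """Leave-one-out nearest-centroid accuracy for two condition classes
    (a stand-in for whatever classifier the study actually used)."""
    X = np.vstack([X_a, X_b])
    y = np.array([0] * len(X_a) + [1] * len(X_b))
    hits = 0
    for i in range(len(X)):
        keep = np.arange(len(X)) != i  # hold out trial i
        c0 = X[keep & (y == 0)].mean(axis=0)
        c1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(X)

# Toy demo: condition A carries a 10 Hz (alpha-band) oscillation on top of
# noise, condition B is noise only, so the two are separable in alpha power.
rng = np.random.default_rng(0)
sfreq, n_times, n_ch = 250, 250, 4
t = np.arange(n_times) / sfreq

def make_epochs(n, alpha_amp):
    noise = rng.standard_normal((n, n_ch, n_times))
    return noise + alpha_amp * np.sin(2 * np.pi * 10 * t)

X_a = alpha_band_power(make_epochs(20, 2.0), sfreq)
X_b = alpha_band_power(make_epochs(20, 0.0), sfreq)
acc = pairwise_accuracy(X_a, X_b)
```

On this deliberately easy toy signal the accuracy is near ceiling; the interesting empirical result in the paper is which real construction pairs (e.g. ditransitive vs. resultative) separate above chance and which overlap.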

Abstract

Understanding how the brain processes linguistic constructions is a central challenge in cognitive neuroscience and linguistics. Recent computational studies show that artificial neural language models spontaneously develop differentiated representations of Argument Structure Constructions (ASCs), generating predictions about when and how construction-level information emerges during processing. The present study tests these predictions in human neural activity using electroencephalography (EEG). Ten native English speakers listened to 200 synthetically generated sentences across four construction types (transitive, ditransitive, caused-motion, resultative) while neural responses were recorded. Analyses using time-frequency methods, feature extraction, and machine learning classification revealed construction-specific neural signatures emerging primarily at sentence-final positions, where argument structure becomes fully disambiguated, and most prominently in the alpha band. Pairwise classification showed reliable differentiation, especially between ditransitive and resultative constructions, while other pairs overlapped. Crucially, the temporal emergence and similarity structure of these effects mirror patterns in recurrent and transformer-based language models, where constructional representations arise during integrative processing stages. These findings support the view that linguistic constructions are neurally encoded as distinct form-meaning mappings, in line with Construction Grammar, and suggest convergence between biological and artificial systems on similar representational solutions. More broadly, this convergence is consistent with the idea that learning systems discover stable regions within an underlying representational landscape (recently termed a Platonic representational space) that constrains the emergence of efficient linguistic abstractions.
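The claim that the similarity structure of the EEG effects "mirrors" that of language models is the kind of comparison typically made with representational similarity analysis (RSA): build a construction-by-construction dissimilarity matrix from each system and correlate their off-diagonal entries. The paper does not specify its exact comparison method, so the sketch below is a generic RSA recipe on hypothetical per-construction vectors, not the authors' procedure; the fake "model" and "EEG" vectors exist only to make the code runnable.

```python
import numpy as np

CONSTRUCTIONS = ["transitive", "ditransitive", "caused-motion", "resultative"]

def rdm(cond_vecs):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between per-construction feature vectors (one row per construction)."""
    return 1.0 - np.corrcoef(cond_vecs)

def upper_triangle(m):
    """Off-diagonal entries of a symmetric matrix (each pair counted once)."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman rank correlation, assuming no tied values."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical data: a "model" embedding per construction, and a lightly
# perturbed copy standing in for the corresponding EEG feature vectors.
rng = np.random.default_rng(1)
model_vecs = rng.standard_normal((4, 50))
eeg_vecs = model_vecs + 0.01 * rng.standard_normal((4, 50))

rho = spearman(upper_triangle(rdm(model_vecs)),
               upper_triangle(rdm(eeg_vecs)))
```

Because RSA compares geometry rather than raw activations, it is a natural way to test "convergent representations" across systems whose units (voltages vs. hidden states) are otherwise incommensurable.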