Convergent Representations of Linguistic Constructions in Human and Artificial Neural Systems
arXiv cs.CL / 4/1/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper compares how human brains and artificial neural language models represent argument structure constructions (ASCs) during sentence processing.
- EEG recordings from 10 participants listening to synthetically generated sentences show construction-specific neural signatures that emerge mainly at sentence-final positions, where argument structure is fully disambiguated.
- Using time-frequency analysis and machine learning classification, the study finds the strongest and most reliable effects in the alpha band and particularly clear differentiation between ditransitive and resultative constructions.
- The timing and similarity structure of human neural effects align with patterns seen in recurrent and transformer-based models, suggesting convergence on similar representational solutions during integrative processing.
- The results are interpreted as supporting Construction Grammar-style distinct form-meaning mappings and the broader idea of “Platonic representational space” where learning systems find stable regions that enable efficient abstractions.
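The decoding approach summarized above — extracting band-limited spectral power from EEG epochs and classifying construction type from it — can be sketched in a minimal, self-contained way. The code below is an illustrative toy, not the paper's pipeline: it simulates two trial classes whose alpha-band (8–12 Hz) amplitude differs (a stand-in for, e.g., ditransitive vs. resultative sentence-final epochs), computes FFT-based alpha power per trial, and classifies with a simple nearest-centroid rule. All parameters (sampling rate, epoch length, amplitudes) are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                      # sampling rate in Hz (illustrative)
n_trials, n_samp = 80, 500    # 80 trials per class, 2 s epochs

def alpha_power(x, fs, lo=8.0, hi=12.0):
    """Mean FFT power in the alpha band (lo-hi Hz) per trial."""
    freqs = np.fft.rfftfreq(x.shape[-1], d=1 / fs)
    spec = np.abs(np.fft.rfft(x, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return spec[..., band].mean(axis=-1)

# Simulate two trial classes differing only in 10 Hz alpha amplitude,
# embedded in unit-variance noise.
t = np.arange(n_samp) / fs
def make_trials(alpha_amp):
    sig = alpha_amp * np.sin(2 * np.pi * 10 * t)
    return sig + rng.normal(0.0, 1.0, (n_trials, n_samp))

X = np.concatenate([alpha_power(make_trials(1.5), fs),
                    alpha_power(make_trials(0.5), fs)])
y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

# Random train/test split and nearest-centroid classification on the
# single alpha-power feature.
idx = rng.permutation(len(y))
train, test = idx[:120], idx[120:]
c0 = X[train][y[train] == 0].mean()
c1 = X[train][y[train] == 1].mean()
pred = (np.abs(X[test] - c1) < np.abs(X[test] - c0)).astype(float)
acc = (pred == y[test]).mean()
print(f"alpha-band decoding accuracy: {acc:.2f}")
```

Because the two simulated classes differ strongly in alpha amplitude, even this one-feature classifier separates them well; the paper's actual analysis uses real EEG, full time-frequency decompositions, and proper cross-validation.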