The grip of grammar on meaning uncertainty: cross-linguistic evidence, neural correlates, and clinical relevance
arXiv cs.CL, May 5, 2026
Key Points
- The study argues that isolated word meanings are inherently uncertain, and that combining words in context—especially through grammar—systematically reduces this uncertainty across languages.
- The authors quantify “uncertainty compression” by comparing non-contextual surprisal (from lexical frequency) with contextual surprisal computed from grammar-sensitive neural models.
- Evidence from narrative data in 20 languages shows that contextual surprisal drops relative to frequency-based surprisal, and that this decrease mirrors the processing cost of reversing word order.
- fMRI results indicate that both surprisal and its grammar-driven reduction predict brain activity during comprehension and production, with overlapping but distinct neural regions.
- The uncertainty reduction effect is significantly weakened in aphasia, dementia, and schizophrenia, while remaining intact in cases where the primary deficit is not language, suggesting clinical relevance for grammar-based mechanisms of meaning.
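The comparison in the second key point can be made concrete with a toy sketch. Below, non-contextual surprisal is computed from lexical frequency alone, while a simple bigram model stands in for the paper's grammar-sensitive neural models; the drop between the two is the "uncertainty compression" the authors measure. The corpus and word choices here are illustrative, not from the study.

```python
import math
from collections import Counter

# Toy corpus (hypothetical; the study uses narrative data in 20 languages).
corpus = ("the dog chased the cat . the cat chased the mouse . "
          "the mouse feared the cat .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)

def noncontextual_surprisal(w):
    # Surprisal from lexical frequency alone: -log2 p(w)
    return -math.log2(unigrams[w] / N)

def contextual_surprisal(prev, w):
    # Surprisal given the preceding word: -log2 p(w | prev).
    # A bigram model is a minimal stand-in for a contextual neural model.
    return -math.log2(bigrams[(prev, w)] / unigrams[prev])

# "Uncertainty compression": how much context lowers a token's surprisal.
prev, w = "the", "cat"
compression = noncontextual_surprisal(w) - contextual_surprisal(prev, w)
print(f"non-contextual: {noncontextual_surprisal(w):.2f} bits")
print(f"contextual:     {contextual_surprisal(prev, w):.2f} bits")
print(f"compression:    {compression:.2f} bits")
```

In this toy example "cat" is frequent but far more predictable after "the", so contextual surprisal is lower and the compression is positive, mirroring the cross-linguistic pattern the paper reports.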