Working Memory Constraints Scaffold Learning in Transformers under Data Scarcity
arXiv cs.CL · April 23, 2026
Key Points
- The study augments Transformer attention with human-like working memory constraints, implementing fixed-width window attention and temporal-decay attention variants.
- Researchers train modified GPT-2 models from scratch on developmentally plausible datasets of 10M and 100M words to test robustness under data scarcity.
- On grammatical judgment benchmarks (BLiMP) and comparisons to human reading-time data, the constrained attention models, especially the fixed-width variant, achieve higher grammatical accuracy than the unconstrained baseline.
- Constrained models show stronger alignment with human processing metrics, suggesting working-memory-inspired limits act as a beneficial inductive bias for language representation.
- The findings indicate that adding cognitive constraints to architectures may be a practical route to better performance when available training data is limited.
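The two attention variants described above can be sketched as modifications to standard causal attention logits: a fixed-width window masks out keys beyond a set distance, while temporal decay penalizes logits in proportion to distance. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the function name, the linear decay penalty, and all parameters are assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def constrained_attention(q, k, v, window=None, decay=None):
    """Single-head causal attention with optional working-memory constraints.

    window: int or None -- fixed-width attention: each token may attend only
            to the most recent `window` positions (itself included).
    decay:  float or None -- temporal-decay attention: subtract decay * distance
            from the logits, down-weighting distant tokens (assumed linear form).
    """
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)              # (T, T) attention logits
    i = np.arange(T)[:, None]                  # query positions
    j = np.arange(T)[None, :]                  # key positions
    dist = i - j                               # how far back each key lies
    mask = dist < 0                            # causal: never attend to the future
    if window is not None:
        mask |= dist >= window                 # drop keys outside the fixed window
    if decay is not None:
        scores = scores - decay * np.maximum(dist, 0)  # distance penalty
    scores = np.where(mask, -np.inf, scores)
    return softmax(scores, axis=-1) @ v
```

With `window=1`, each token can attend only to itself, so the output equals `v`; larger windows and nonzero `decay` interpolate toward ordinary causal attention with a recency bias.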