Working Memory Constraints Scaffold Learning in Transformers under Data Scarcity

arXiv cs.CL / 4/23/2026


Key Points

  • The study augments Transformer attention with human-like working memory constraints, implementing fixed-width window attention and temporal-decay attention variants.
  • Researchers train modified GPT-2 models from scratch on developmentally plausible datasets of 10M and 100M words to test robustness under data scarcity.
  • On grammatical judgment benchmarks (BLiMP) and comparisons to human reading-time data, the constrained attention models, especially fixed-width attention, improve grammatical accuracy.
  • Constrained models show stronger alignment with human processing metrics, suggesting working-memory-inspired limits act as a beneficial inductive bias for language representation.
  • The findings indicate that adding cognitive constraints to architectures may be a practical route to better performance when available training data is limited.
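The paper does not spell out its attention variants in detail here, but the two constraints named above, a fixed-width attention window and temporal decay, are commonly imposed on the attention score matrix before the softmax. A minimal numpy sketch under that assumption (the function names and the linear decay form are illustrative, not the authors' exact formulation):

```python
import numpy as np

def causal_window_mask(seq_len, window):
    # Each position i may attend only to the last `window` positions,
    # i.e. positions j with i - window < j <= i (causal, fixed width).
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

def temporal_decay_bias(seq_len, rate):
    # Additive bias penalizing attention to distant past tokens:
    # -rate * (i - j) for j <= i. A linear decay is assumed here;
    # other decay shapes (e.g. exponential) would work the same way.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return -rate * np.clip(i - j, 0, None).astype(float)

def masked_softmax(scores, mask):
    # Standard numerically stable softmax over allowed positions only.
    scores = np.where(mask, scores, -np.inf)
    scores = scores - scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)
```

With uniform (zero) raw scores and `window=2`, position 4 splits its attention evenly over positions 3 and 4; adding the decay bias instead tilts every row toward recent tokens while still normalizing to 1.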

Abstract

We investigate the integration of human-like working memory constraints into the Transformer architecture and implement several cognitively inspired attention variants, including fixed-width window and temporal-decay attention mechanisms. Our modified GPT-2 models are trained from scratch on developmentally plausible datasets (10M and 100M words). Performance is evaluated on grammatical judgment tasks (BLiMP) and alignment with human reading-time data. Our results indicate that these cognitively inspired constraints, particularly fixed-width attention, can significantly improve grammatical accuracy, especially when training data is scarce. These constrained models also tend to show stronger alignment with human processing metrics. The findings suggest that such constraints may serve as a beneficial inductive bias, guiding models towards more robust linguistic representations, especially in data-limited settings.