The Cognitive Divergence: AI Context Windows, Human Attention Decline, and the Delegation Feedback Loop
arXiv cs.AI · March 31, 2026
💬 Opinion · Signals & Early Trends · Ideas & Deep Analysis · Models & Research
Key Points
- The paper argues that rapidly expanding LLM context windows (rising from 512 tokens in 2017 to about 2,000,000 tokens by 2026) are diverging from a long-term decline in humans’ sustained-attention capacity, quantified as Effective Context Span (ECS).
- It estimates ECS has fallen from roughly 16,000 tokens (2004 baseline) to around 1,800 tokens by 2026, using token-equivalent measures derived from reading-rate meta-analyses and longitudinal behavioral data.
- The authors estimate a growing AI-to-human information ratio: near parity at the ChatGPT launch (November 2022), it has since risen to hundreds to over a thousand times in raw token terms, or tens to over a hundred times quality-adjusted, after accounting for retrieval degradation.
- It proposes a “Delegation Feedback Loop” hypothesis: as AI capabilities improve, people delegate to AI at lower cognitive thresholds, potentially reducing cognitive practice and further weakening the capacities already trending downward.
- The paper surveys neurobiological mechanisms and lays out a research agenda focused on a validated ECS psychometric instrument and longitudinal studies of AI-mediated cognitive change.
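The figures in the key points above imply rough growth and decline rates. A minimal sketch of that back-of-envelope arithmetic, assuming a constant-rate (exponential) model for both trends — an illustrative assumption, not the paper's method:

```python
# Back-of-envelope arithmetic for the divergence described above.
# All figures come from the paper summary; the exponential model is
# an illustrative assumption, not the authors' methodology.

ecs_2004 = 16_000    # tokens: estimated Effective Context Span, 2004 baseline
ecs_2026 = 1_800     # tokens: estimated ECS by 2026
ctx_2017 = 512       # tokens: typical LLM context window, 2017
ctx_2026 = 2_000_000 # tokens: approximate context window, 2026

# Implied annual ECS decline under a constant-rate model
annual_decline = 1 - (ecs_2026 / ecs_2004) ** (1 / (2026 - 2004))
print(f"Implied ECS decline: {annual_decline:.1%} per year")

# Implied annual context-window growth over the same kind of model
ctx_growth = (ctx_2026 / ctx_2017) ** (1 / (2026 - 2017)) - 1
print(f"Context-window growth: {ctx_growth:.0%} per year")

# The 2026 divergence ratio: machine context vs. human effective span
print(f"2026 context window / ECS: {ctx_2026 / ecs_2026:,.0f}x")
```

Under these assumptions, ECS declines on the order of 9-10% per year while context windows grow by more than 150% per year, which is the divergence the paper's title refers to.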