Human-like Working Memory Interference in Large Language Models

arXiv cs.LG · April 14, 2026


Key Points

  • The paper examines why large language models (LLMs) exhibit working-memory limitations even though transformers can attend to full prior context.
  • Experiments show that pretrained LLMs reproduce human-like interference patterns, including performance degradation under higher memory load and biases driven by recency and stimulus statistics.
  • A key result is that LLMs encode multiple memory items in entangled representations, so successful recall depends on interference control -- suppressing the irrelevant items -- rather than on directly copying the target item from the context.
  • The authors provide causal evidence via targeted intervention: suppressing stimulus-content information improves working-memory task performance.
  • Across models, stronger working-memory capacity correlates with broader benchmark competence, suggesting working memory as a shared computational constraint linked to general intelligence.
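The load and recency effects in the first two points can be illustrated with a toy simulation. The key-value task format and the recency-biased responder below are illustrative assumptions of mine, not the paper's actual stimuli or models; the sketch only shows how a recency-weighted recall process produces the reported load-dependent degradation.

```python
import random

LETTERS = "ABCDEFGHIJKLMNOP"

def make_trial(load, rng):
    """One key-value trial: `load` letter->digit pairs, then probe one key."""
    keys = rng.sample(LETTERS, load)
    pairs = [(k, rng.randint(0, 9)) for k in keys]
    probe_key, target = rng.choice(pairs)
    return pairs, probe_key, target

def recency_biased_answer(pairs, probe_key, rng):
    """Toy responder: recall reliability decays with distance from the end
    of the list; on failure it emits another stored value (interference)."""
    idx = [k for k, _ in pairs].index(probe_key)
    distance = len(pairs) - 1 - idx
    p_correct = max(0.1, 1.0 - 0.1 * distance)
    if rng.random() < p_correct:
        return dict(pairs)[probe_key]
    return rng.choice([v for _, v in pairs])

def accuracy(load, n_trials=2000, seed=0):
    """Recall accuracy of the toy responder at a given memory load."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        pairs, probe_key, target = make_trial(load, rng)
        hits += recency_biased_answer(pairs, probe_key, rng) == target
    return hits / n_trials

print(accuracy(2), accuracy(8))  # accuracy drops as memory load grows
```

Swapping `recency_biased_answer` for calls to a real LLM (same trial format, probed in-context) is the shape of experiment the paper runs at scale.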

Abstract

Intelligent systems must maintain and manipulate task-relevant information online to adapt to dynamic environments and changing goals. This capacity, known as working memory, is fundamental to human reasoning and intelligence. Despite their scale -- the human brain alone has on the order of 100 billion neurons -- both biological and artificial systems exhibit limitations in working memory. This raises a key question: why do large language models (LLMs) show such limitations, given that transformers have full access to prior context through attention? We find that although a two-layer transformer can be trained to solve working memory tasks perfectly, a diverse set of pretrained LLMs continues to show working memory limitations. Notably, LLMs reproduce interference signatures observed in humans: performance degrades with increasing memory load and is biased by recency and stimulus statistics. Across models, stronger working memory capacity correlates with broader competence on standard benchmarks, mirroring its link to general intelligence in humans. Yet despite substantial variability in working memory performance, LLMs surprisingly converge on a common computational mechanism. Rather than directly copying the relevant memory item from context, models encode multiple memory items in entangled representations, such that successful recall depends on interference control -- actively suppressing task-irrelevant content to isolate the target for readout. Moreover, a targeted intervention that suppresses stimulus-content information improves performance, providing causal support for representational interference. Together, these findings identify representational interference as a core constraint on working memory in pretrained LLMs, suggesting that working-memory limits in biological and artificial systems may reflect a shared computational challenge: selecting task-relevant information under interference.
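The suppression intervention the abstract alludes to can be sketched numerically: estimate a "stimulus-content" direction in hidden-state space and project it out before readout. Everything below is an illustrative assumption -- the mean-difference direction, the tiny hand-written vectors, and plain-list arithmetic stand in for the authors' actual representations and intervention.

```python
def dot(u, v):
    """Inner product of two equal-length vectors (plain lists)."""
    return sum(a * b for a, b in zip(u, v))

def mean(vectors):
    """Componentwise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def project_out(h, d):
    """Return h with its component along direction d removed."""
    scale = dot(h, d) / dot(d, d)
    return [a - scale * b for a, b in zip(h, d)]

# Toy "hidden states" recorded under two stimulus contents (invented numbers).
states_a = [[1.0, 0.2, 0.0], [0.9, 0.1, 0.1]]
states_b = [[-1.0, 0.3, 0.0], [-0.8, 0.2, 0.1]]

# Hypothetical stimulus-content direction: difference of the class means.
direction = [a - b for a, b in zip(mean(states_a), mean(states_b))]

h = [0.5, 0.4, 0.3]
h_suppressed = project_out(h, direction)
print(dot(h_suppressed, direction))  # ~0: content component removed
```

In a real model the same projection would be applied to residual-stream activations at inference time (e.g., via forward hooks); the paper's causal claim is that this kind of suppression *improves* recall, which is what one expects if entangled stimulus content is a source of interference.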