MSA: Memory Sparse Attention for Efficient End-to-End Memory Model Scaling to 100M Tokens

arXiv cs.CL / 3/26/2026


Key Points

  • The paper introduces Memory Sparse Attention (MSA), an end-to-end trainable memory-model framework that scales an LLM's long-term context far beyond typical full-attention limits, up to 100M tokens.
  • MSA targets prior long-context approaches’ bottlenecks—precision degradation, rising latency, and limited ability to dynamically modify memory—by using scalable sparse attention and document-wise RoPE to achieve linear complexity.
  • Experiments report strong stability, with less than 9% degradation when scaling from 16K to 100M tokens, suggesting practical feasibility for lifetime-scale memory use cases.
  • The method adds KV-cache compression plus “Memory Parallel” to run 100M-token inference on 2×A800 GPUs, and “Memory Interleaving” to support multi-hop reasoning across separated memory segments.
  • The authors claim MSA outperforms frontier LLMs, state-of-the-art RAG systems, and memory-agent approaches on long-context benchmarks, positioning it as a route to intrinsic, lifetime-scale memory.
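
The summary does not spell out the paper's exact "document-wise RoPE" formulation; a common reading is that rotary position indices restart at each document boundary, so absolute positions stay bounded no matter how many documents the memory holds. The sketch below illustrates that idea only; the function names, the position-reset scheme, and the RoPE layout (split-half rotation) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rope_rotate(x, positions, base=10000.0):
    """Apply rotary position embedding (RoPE) to vectors x at the given
    integer positions, using the common split-half rotation layout."""
    d = x.shape[-1]
    half = d // 2
    inv_freq = base ** (-np.arange(half) / half)      # per-pair frequencies
    angles = positions[:, None] * inv_freq[None, :]   # (seq, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

def documentwise_positions(doc_lengths):
    """Restart position indices at every document boundary, so positions
    never grow with the total memory length (illustrative assumption)."""
    return np.concatenate([np.arange(n) for n in doc_lengths])

# Three documents of lengths 4, 3, 5 each get positions 0..len-1:
pos = documentwise_positions([4, 3, 5])
# pos == [0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4]
```

Because every document sees only small position indices, the model never has to extrapolate RoPE to 100M-token absolute positions, which is one plausible reason scaling could stay stable.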

Abstract

Long-term memory is a cornerstone of human intelligence. Enabling AI to process lifetime-scale information remains a long-standing pursuit in the field. Due to the constraints of full-attention architectures, the effective context length of large language models (LLMs) is typically limited to 1M tokens. Existing approaches, such as hybrid linear attention, fixed-size memory states (e.g., RNNs), and external storage methods like RAG or agent systems, attempt to extend this limit. However, they often suffer from severe precision degradation and rapidly increasing latency as context length grows, an inability to dynamically modify memory content, or a lack of end-to-end optimization. These bottlenecks impede complex scenarios like large-corpus summarization, Digital Twins, and long-history agent reasoning, while limiting memory capacity and slowing inference. We present Memory Sparse Attention (MSA), an end-to-end trainable, efficient, and massively scalable memory model framework. Through core innovations including scalable sparse attention and document-wise RoPE, MSA achieves linear complexity in both training and inference while maintaining exceptional stability, exhibiting less than 9% degradation when scaling from 16K to 100M tokens. Furthermore, KV cache compression, combined with Memory Parallel, enables 100M-token inference on 2×A800 GPUs. We also propose Memory Interleaving to facilitate complex multi-hop reasoning across scattered memory segments. MSA significantly surpasses frontier LLMs, state-of-the-art RAG systems, and leading memory agents in long-context benchmarks. These results demonstrate that by decoupling memory capacity from reasoning, MSA provides a scalable foundation to endow general-purpose models with intrinsic, lifetime-scale memory.
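
The abstract names "scalable sparse attention" as the source of linear complexity but gives no formulation. A minimal sketch of one standard block-sparse scheme (not necessarily MSA's): keys are grouped into fixed-size blocks, each block is scored by a cheap summary, and the query attends densely only within the top-scoring blocks. All names, the mean-key summary, and the parameters here are illustrative assumptions.

```python
import numpy as np

def block_sparse_attention(q, K, V, block_size=4, top_k=2):
    """One query attends only to the top_k memory blocks, selected by
    similarity to each block's mean key, instead of all N keys."""
    n, d = K.shape
    n_blocks = n // block_size
    Kb = K[: n_blocks * block_size].reshape(n_blocks, block_size, d)
    Vb = V[: n_blocks * block_size].reshape(n_blocks, block_size, -1)
    # Score each block via its summary (mean key): O(n / block_size) work.
    summaries = Kb.mean(axis=1)                 # (n_blocks, d)
    chosen = np.argsort(summaries @ q)[-top_k:]
    # Dense attention restricted to selected blocks: O(top_k * block_size).
    Ks = Kb[chosen].reshape(-1, d)
    Vs = Vb[chosen].reshape(-1, Vb.shape[-1])
    logits = Ks @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ Vs
```

Per generated token, the expensive softmax runs over only `top_k * block_size` keys regardless of total memory length, which is the kind of decoupling of memory capacity from per-step compute that the abstract describes.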