Forget, Then Recall: Learnable Compression and Selective Unfolding via Gist Sparse Attention

arXiv cs.LG / 4/24/2026


Key Points

  • The paper addresses the quadratic cost of full attention in long-context language models by proposing an end-to-end learnable alternative to standalone KV-cache selection or compression methods.
  • It introduces interleaved “gist” compression tokens that act as routing signals for sparse attention, enabling a coarse-to-fine workflow.
  • The proposed “selective unfolding via GSA” first compresses the input into gist tokens, selects the most relevant gists, and then restores only the corresponding raw chunks for detailed attention.
  • The method is trained end-to-end without external retrieval modules and is further extended with recursive gist-of-gist construction for multi-resolution context access with logarithmic per-step decoding complexity.
  • Experiments on LongBench and RAG benchmarks show consistent gains over other compression baselines and inference-time sparse attention approaches across 8× to 32× compression ratios, with code released on GitHub.
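The coarse-to-fine workflow in the points above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: mean pooling stands in for the learned gist tokens, and a simple dot-product score stands in for the trained routing signal; the function names (`selective_unfold`, `softmax`) are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_unfold(query, tokens, chunk=4, top_k=2):
    """Coarse-to-fine sketch: pool each chunk into a 'gist',
    score gists against the query, keep the top_k chunks, and
    attend only over their raw tokens."""
    n, d = tokens.shape
    chunks = tokens.reshape(n // chunk, chunk, d)
    gists = chunks.mean(axis=1)                # stand-in for learned gist tokens
    scores = gists @ query                     # coarse routing scores per gist
    keep = np.argsort(scores)[-top_k:]         # select the most relevant gists
    raw = chunks[keep].reshape(-1, d)          # "unfold" the selected raw chunks
    attn = softmax(raw @ query / np.sqrt(d))   # fine-grained attention weights
    return attn @ raw                          # attended output over raw tokens
```

With `top_k` chunks of size `chunk`, detailed attention touches only `top_k * chunk` raw tokens instead of all `n`, which is the source of the 8× to 32× compression ratios reported in the experiments.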

Abstract

Scaling large language models to long contexts is challenging due to the quadratic computational cost of full attention. Mitigation approaches include KV-cache selection or compression techniques. We instead provide an effective and end-to-end learnable bridge between the two without requiring architecture modification. In particular, our key insight is that interleaved gist compression tokens (which provide a learnable summary of sets of raw tokens) can serve as routing signals for sparse attention. Building on this, we introduce selective unfolding via GSA, which first compresses the context into gist tokens, then selects the most relevant gists, and subsequently restores the corresponding raw chunks for detailed attention. This yields a simple coarse-to-fine mechanism that combines compact global representations with targeted access to fine-grained evidence. We further incorporate this process directly into training in an end-to-end fashion, avoiding the need for external retrieval modules. In addition, we extend the framework hierarchically via recursive gist-of-gist construction, enabling multi-resolution context access with logarithmic per-step decoding complexity. Empirical results on LongBench and RAG benchmarks demonstrate that our method consistently outperforms other compression baselines as well as inference-time sparse attention methods across compression ratios from 8× to 32×. The code is available at: https://github.com/yuzhenmao/gist-sparse-attention/
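The hierarchical extension can be illustrated with a small sketch of why recursive gist-of-gist construction gives logarithmic per-step cost: a pyramid of summaries is built once, and each decoding step only scores one node's children per level. As above, mean pooling is a hypothetical stand-in for the learned gist construction, and `build_hierarchy`/`descend` are our names, not the paper's API.

```python
import numpy as np

def build_hierarchy(tokens, chunk=2):
    """Recursive gist-of-gist sketch: pool groups of `chunk` vectors
    level by level into a pyramid of summaries (mean pooling stands
    in for learned gists). levels[0] is raw, levels[-1] is the root."""
    levels = [tokens]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % chunk:                   # pad so groups divide evenly
            cur = np.vstack([cur, cur[-1:]])
        levels.append(cur.reshape(-1, chunk, cur.shape[1]).mean(axis=1))
    return levels

def descend(levels, query, chunk=2):
    """Multi-resolution access: from the root, score only the `chunk`
    children of the current node at each level, so one step costs
    O(chunk * log n) rather than O(n)."""
    idx = 0
    for lvl in reversed(levels[:-1]):          # coarse -> fine
        lo = idx * chunk
        cand = range(lo, min(lo + chunk, len(lvl)))
        idx = max(cand, key=lambda i: lvl[i] @ query)
    return idx                                 # most relevant raw position
```

Scoring `chunk` candidates at each of the `log_chunk(n)` levels is what the abstract's "logarithmic per-step decoding complexity" refers to, with learned gists replacing the mean-pooled summaries used here.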