Latent-Condensed Transformer for Efficient Long Context Modeling

arXiv cs.CL · April 15, 2026


Key Points

  • The paper introduces Latent-Condensed Attention (LCA), a new long-context attention method designed to work directly within Multi-head Latent Attention (MLA)’s compressed latent space rather than treating cache and compute reductions separately.
  • LCA reduces both key-value (KV) cache size and attention computation by aggregating semantic latent vectors via query-aware pooling while preserving positional information through anchor selection.
  • The method adds no new parameters and is architecture-agnostic, making it extendable beyond MLA to other attention variants such as GQA.
  • The authors provide a theoretical length-independent error bound for the approach.
  • Experiments report up to 2.5× faster prefill and about a 90% KV cache reduction at 128K context length while keeping performance competitive.

Abstract

Large language models (LLMs) face significant challenges in processing long contexts due to the linear growth of the key-value (KV) cache and quadratic complexity of self-attention. Existing approaches address these bottlenecks separately: Multi-head Latent Attention (MLA) reduces the KV cache by projecting tokens into a low-dimensional latent space, while sparse attention reduces computation. However, sparse methods cannot operate natively on MLA's compressed latent structure, missing opportunities for joint optimization. In this paper, we propose Latent-Condensed Attention (LCA), which directly condenses context within MLA's latent space, where the representation is disentangled into semantic latent vectors and positional keys. LCA separately aggregates semantic vectors via query-aware pooling and preserves positional keys via anchor selection. This approach jointly reduces both computational cost and KV cache without adding parameters. Beyond MLA, LCA's design is architecture-agnostic and readily extends to other attention mechanisms such as GQA. Theoretically, we prove a length-independent error bound. Experiments show LCA achieves up to 2.5× prefilling speedup and 90% KV cache reduction at 128K context while maintaining competitive performance.
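To make the two-step condensation concrete, here is a minimal NumPy sketch of the idea described above: semantic latent vectors are collapsed block-by-block via query-aware softmax pooling, while a few positional keys per block are kept as anchors. All shapes, block sizes, and the anchor-scoring criterion here are illustrative assumptions, not the paper's actual implementation; the real method operates on MLA's learned latent projections and decoupled RoPE keys.

```python
import numpy as np

rng = np.random.default_rng(0)

T, d_lat, d_pos = 64, 8, 4   # context length, latent dim, positional-key dim (illustrative)
block = 8                    # tokens condensed per block (assumed)
n_anchors = 2                # positional keys kept per block (assumed)

latents = rng.standard_normal((T, d_lat))    # stand-in for MLA semantic latent vectors
pos_keys = rng.standard_normal((T, d_pos))   # stand-in for decoupled positional keys
query = rng.standard_normal(d_lat)           # stand-in for a query-side summary vector

condensed_latents = []
anchor_keys = []
for start in range(0, T, block):
    c = latents[start:start + block]         # (block, d_lat)
    k = pos_keys[start:start + block]        # (block, d_pos)

    # Query-aware pooling: score each latent against the query, softmax the
    # scores, and collapse the block to a single weighted-average latent.
    scores = c @ query / np.sqrt(d_lat)
    w = np.exp(scores - scores.max())
    w /= w.sum()
    condensed_latents.append(w @ c)

    # Anchor selection (one possible criterion): retain the positional keys
    # of the highest-scoring tokens so position information survives pooling.
    anchors = np.argsort(scores)[-n_anchors:]
    anchor_keys.append(k[anchors])

condensed_latents = np.stack(condensed_latents)   # (T/block, d_lat) -> 8x fewer latents
anchor_keys = np.concatenate(anchor_keys)         # (T/block * n_anchors, d_pos)

print(condensed_latents.shape, anchor_keys.shape)  # (8, 8) (16, 4)
```

In this toy setup the cache shrinks from 64 latent/key pairs to 8 condensed latents plus 16 anchor keys, which is the flavor of joint cache-and-compute reduction the paper reports at much larger scale.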