TriAttention: Efficient Long Reasoning with Trigonometric KV Compression

arXiv cs.CL / 4/7/2026


Key Points

  • The paper addresses the KV cache memory bottleneck in long-form reasoning for LLMs by improving KV cache compression and key-importance estimation.
  • It argues that common approaches relying on recent post-RoPE attention scores fail because RoPE rotates Q/K with position, producing unstable and unrepresentative “top-key” selection.
  • TriAttention instead operates in the pre-RoPE space, leveraging an observed concentration of Q and K vectors around fixed non-zero centers that yields stable, distance-preferring attention behavior.
  • Using a trigonometric series derived from the concentration centers (plus Q/K norms as an auxiliary signal), TriAttention scores and retains keys more effectively for reasoning.
  • Experiments on AIME25 (32K-token generation) show TriAttention matches full-attention reasoning accuracy while delivering 2.5x higher throughput or 10.7x lower KV memory, enabling OpenClaw deployment on a single consumer GPU where long contexts would otherwise cause out-of-memory errors.
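The key points describe scoring cached keys by relative distance via a trigonometric series over the pre-RoPE concentration centers. The mathematical core is that RoPE is a blockwise 2D rotation, so the dot product between rotated centers depends only on relative position and expands into a sine/cosine series. Below is a minimal numpy sketch of that identity; the function names, the specific RoPE convention, and the sampled centers are illustrative assumptions, not the paper's code:

```python
import numpy as np

def rope_rotate(x, pos, theta):
    """Standard RoPE: rotate each 2D pair (x[2j], x[2j+1]) by angle pos * theta[j]."""
    c, s = np.cos(pos * theta), np.sin(pos * theta)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * c - x[1::2] * s
    out[1::2] = x[0::2] * s + x[1::2] * c
    return out

def trig_series_score(mu_q, mu_k, d, theta):
    """Attention logit between concentration centers mu_q and mu_k at relative
    distance d, written as a trigonometric series in d. Because RoPE is a
    pairwise rotation, <R(m) mu_q, R(n) mu_k> depends only on d = m - n."""
    a = mu_q[0::2] * mu_k[0::2] + mu_q[1::2] * mu_k[1::2]  # cosine coefficients
    b = mu_q[0::2] * mu_k[1::2] - mu_q[1::2] * mu_k[0::2]  # sine coefficients
    return float(np.sum(a * np.cos(d * theta) + b * np.sin(d * theta)))

# Hypothetical concentrated centers and RoPE frequencies: the series
# reproduces the exact post-RoPE dot product at any pair of positions.
rng = np.random.default_rng(0)
dim = 16
theta = 10000.0 ** (-np.arange(dim // 2) / (dim // 2))
mu_q, mu_k = rng.normal(0.5, 0.1, dim), rng.normal(0.5, 0.1, dim)
m, n = 120, 87
exact = rope_rotate(mu_q, m, theta) @ rope_rotate(mu_k, n, theta)
series = trig_series_score(mu_q, mu_k, m - n, theta)
print(abs(exact - series) < 1e-9)  # True: the series matches the rotated dot product
```

Since the series needs only a key's position (plus, per the paper, Q/K norms as an auxiliary signal), importance can be estimated without materializing post-RoPE attention scores from recent queries.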

Abstract

Extended reasoning in large language models (LLMs) creates severe KV cache memory bottlenecks. Leading KV cache compression methods estimate KV importance using attention scores from recent post-RoPE queries. However, RoPE rotates queries with position, so very few recent queries are representative, leading to poor top-key selection and unstable reasoning. To avoid this issue, we turn to the pre-RoPE space, where we observe that Q and K vectors are highly concentrated around fixed non-zero centers and remain stable across positions, a property we call Q/K concentration. We show that this concentration causes queries to preferentially attend to keys at specific distances (e.g., the nearest keys), with the centers determining which distances are preferred via a trigonometric series. Based on this, we propose TriAttention, which estimates key importance by leveraging these centers: via the trigonometric series, the distance preference characterized by the centers scores keys according to their positions, and Q/K norms serve as an additional importance signal. On AIME25 with 32K-token generation, TriAttention matches Full Attention reasoning accuracy while achieving 2.5x higher throughput or 10.7x KV memory reduction, whereas leading baselines achieve only about half the accuracy at the same efficiency. TriAttention enables OpenClaw deployment on a single consumer GPU, where long contexts would otherwise cause out-of-memory with Full Attention.
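The abstract's motivating claim, that position-rotated queries make recent post-RoPE attention scores unrepresentative, can be illustrated with a toy experiment. In the sketch below (all names, dimensions, and sampled values are illustrative assumptions, not the paper's setup), the same pre-RoPE query scores a cached key set differently depending on the position at which it is applied, so "top keys" estimated from recent queries need not hold at later decoding steps:

```python
import numpy as np

def rope_rotate(x, pos, theta):
    """Standard RoPE: rotate pairs (x[2j], x[2j+1]) by angle pos * theta[j]."""
    c, s = np.cos(pos * theta), np.sin(pos * theta)
    out = np.empty_like(x)
    out[0::2] = x[0::2] * c - x[1::2] * s
    out[1::2] = x[0::2] * s + x[1::2] * c
    return out

rng = np.random.default_rng(1)
dim, n_keys = 16, 64
theta = 10000.0 ** (-np.arange(dim // 2) / (dim // 2))
center = rng.normal(0.5, 0.1, dim)                # fixed non-zero center
q_pre = center + 0.05 * rng.normal(size=dim)      # query concentrated near it
keys_pre = center + 0.05 * rng.normal(size=(n_keys, dim))

def key_scores(q_pos):
    """Post-RoPE attention logits of one query against all cached keys."""
    q = rope_rotate(q_pre, q_pos, theta)
    return np.array([rope_rotate(keys_pre[i], i, theta) @ q
                     for i in range(n_keys)])

# The same pre-RoPE query produces different score profiles over the cache
# at different positions, so key importance read off recent queries drifts.
s_early, s_late = key_scores(100), key_scores(400)
print(np.allclose(s_early, s_late))  # False: scores shift with query position
```

The pre-RoPE centers themselves, by contrast, do not move with position, which is the stability TriAttention exploits.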