SinkRouter: Sink-Aware Routing for Efficient Long-Context Decoding in Large Language and Multimodal Models

arXiv cs.LG / 4/21/2026


Key Points

  • The paper argues that long-context decoding in large language and multimodal models is often bottlenecked by GPU memory bandwidth due to repeated KV-cache loads per decoding step.
  • It links the “attention sink” phenomenon to a stable, reachable, and error-controllable fixed point that emerges during training, offering a more mechanistic explanation than prior heuristics.
  • Based on this insight, the authors propose SinkRouter, a training-free selective routing method that detects sink signals and skips computations likely to produce near-zero outputs.
  • To make the approach practical on real hardware, they implement a hardware-aware Triton kernel using block-level branching and Split-K parallelism.
  • Experiments on multiple long-context benchmarks and both text-only and multimodal backbones show consistent efficiency gains, including up to 2.03× speedup at 512K context with competitive accuracy.
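The memory-bandwidth bottleneck in the first bullet comes down to simple arithmetic: every decoded token must stream the entire KV-cache out of GPU memory. The sketch below illustrates the scale with assumed, Llama-3.1-8B-like numbers (32 layers, 8 KV heads via grouped-query attention, head dimension 128, FP16); none of these figures are taken from the paper.

```python
# Back-of-envelope KV-cache traffic per decoding step, showing why
# long-context decoding becomes memory-bound. All config values are
# illustrative assumptions, not numbers reported in the paper.

def kv_bytes_per_step(context_len, n_layers=32, n_kv_heads=8,
                      head_dim=128, bytes_per_elem=2):
    """Bytes of K and V read from GPU memory for one decoded token."""
    # Factor of 2 covers both the K cache and the V cache.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem * context_len

ctx = 512 * 1024  # the 512K-token context from the benchmark setting
traffic = kv_bytes_per_step(ctx)
print(f"{traffic / 1e9:.1f} GB loaded per decoding step")  # ~68.7 GB
```

At tens of gigabytes of reads per generated token, arithmetic throughput is largely idle, which is why skipping KV loads (rather than FLOPs) is what yields the reported speedups.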

Abstract

In long-context decoding for LLMs and LMMs, attention becomes increasingly memory-bound because each decoding step must load a large amount of KV-cache data from GPU memory. Existing acceleration strategies often trade efficiency for accuracy by relying on heuristic pruning that may discard useful information. At a deeper level, they also tend to indiscriminately preserve all high-scoring tokens, treat early tokens as indispensable anchors, or rely on heuristic head routing, reflecting an insufficient mechanistic understanding of the attention sink phenomenon. In this paper, we show that the attention sink phenomenon corresponds to a stable, reachable, and error-controllable fixed point constructed during training. Based on this insight, we propose SinkRouter, a training-free selective routing framework that detects the sink signal and skips computations that would otherwise produce near-zero output. To translate this mechanism into real-world acceleration, we develop a hardware-aware Triton kernel with block-level branching and Split-K parallelism. We conduct extensive evaluations on a diverse suite of long-context benchmarks, including LongBench, InfiniteBench, CVBench, MileBench, and MMVP, using both text-only and multimodal backbones such as Llama-3.1-8B, Llama-3.1-70B, Yi-9B-200K, LLaVA-1.5-7B, and LLaVA-1.5-13B. Across these settings, SinkRouter consistently improves decoding efficiency while maintaining competitive accuracy, and reaches a 2.03× speedup at a 512K context.
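The routing idea in the abstract can be illustrated with a toy sketch: when a query's softmax mass collapses onto the sink token, the attention output carries almost no information from the rest of the cache, so the expensive scan can be skipped. This is a hypothetical NumPy illustration, not the authors' Triton kernel; the threshold `tau`, the single-sink assumption, and the detection rule are all assumptions (a real kernel would detect the sink cheaply rather than computing full scores first).

```python
# Hypothetical sketch of sink-aware selective routing. For clarity we
# compute all attention scores before deciding; the actual method would
# need a much cheaper sink-detection signal to save bandwidth.
import numpy as np

def sink_route(q, K, V, sink_idx=0, tau=0.95):
    """Return (attention output, skipped?) for one query vector.

    If the softmax mass on the sink token exceeds tau, route to the
    skip path and avoid the dense pass over the value cache.
    """
    scores = K @ q / np.sqrt(q.shape[-1])
    scores -= scores.max()              # stabilize softmax
    probs = np.exp(scores)
    probs /= probs.sum()
    if probs[sink_idx] >= tau:          # sink-dominated: near-zero-information output
        return V[sink_idx] * probs[sink_idx], True
    return probs @ V, False             # dense path over the full KV-cache
```

For example, a query whose score on token 0 dwarfs all others takes the skip path, while a query with uniform scores falls through to the dense computation.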