AsyncTLS: Efficient Generative LLM Inference with Asynchronous Two-level Sparse Attention

arXiv cs.CL / 4/10/2026


Key Points

  • End-to-end throughput gains of 1.3x–4.7x are reported for 48k–96k context lengths, indicating practical benefits for long-context deployment.

Abstract

Long-context inference in LLMs faces the dual challenges of quadratic attention complexity and prohibitive KV cache memory. While token-level sparse attention offers superior accuracy, its indexing overhead is costly; block-level methods improve efficiency but sacrifice precision. We propose AsyncTLS, a hierarchical sparse attention system that combines coarse-grained block filtering with fine-grained token selection to balance accuracy and efficiency, coupled with an asynchronous offloading engine that overlaps KV cache transfers with computation by exploiting temporal locality. Evaluated on Qwen3 and GLM-4.7-Flash across GQA and MLA architectures, AsyncTLS achieves accuracy comparable to full attention while delivering 1.2x–10.0x operator speedups and 1.3x–4.7x end-to-end throughput improvements on 48k–96k contexts.
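To make the two-level idea concrete, here is a minimal NumPy sketch of hierarchical sparse attention for a single query: a coarse stage scores mean-pooled block representatives and keeps the top blocks, then a fine stage selects the top tokens within those blocks before attending. The function name, pooling choice, and all parameters (`block_size`, `top_blocks`, `top_tokens`) are illustrative assumptions, not the paper's actual kernel or API.

```python
import numpy as np

def two_level_sparse_attention(q, K, V, block_size=16, top_blocks=2, top_tokens=8):
    """Illustrative two-level (block -> token) sparse attention for one query.

    Hypothetical sketch: block scoring via mean-pooled keys is an assumption,
    not AsyncTLS's actual filtering criterion.
    """
    n, d = K.shape
    n_blocks = n // block_size

    # Stage 1: coarse block filtering. Score each block by the dot product
    # of the query with the block's mean-pooled key, keep the top blocks.
    block_keys = K[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    block_scores = block_keys @ q
    kept_blocks = np.argsort(block_scores)[-top_blocks:]

    # Stage 2: fine-grained token selection inside the surviving blocks only.
    idx = np.concatenate(
        [np.arange(b * block_size, (b + 1) * block_size) for b in kept_blocks]
    )
    token_scores = K[idx] @ q
    idx = idx[np.argsort(token_scores)[-top_tokens:]]

    # Attend over just the selected tokens (softmax-weighted sum of values).
    logits = K[idx] @ q / np.sqrt(d)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V[idx]

rng = np.random.default_rng(0)
q = rng.standard_normal(64)
K = rng.standard_normal((128, 64))
V = rng.standard_normal((128, 64))
out = two_level_sparse_attention(q, K, V)
print(out.shape)  # (64,)
```

The coarse stage touches only one pooled vector per block, so its cost scales with the number of blocks rather than tokens; the fine stage then recovers token-level precision but only inside the few blocks that survive filtering, which is the accuracy/efficiency trade-off the abstract describes.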