AI Navigate

Zipage: Maintain High Request Concurrency for LLM Reasoning through Compressed PagedAttention

arXiv cs.AI / 3/11/2026

Developer Stack & Infrastructure · Ideas & Deep Analysis · Tools & Practical Usage · Models & Research

Key Points

  • The paper presents Compressed PagedAttention, a novel approach that combines token-wise KV cache eviction with PagedAttention to relieve the KV cache memory bottleneck during LLM decoding (see the sketch after this list).
  • Zipage, a high-concurrency LLM inference engine built on Compressed PagedAttention, supports prefix caching and asynchronous compression to serve reasoning workloads efficiently.
  • On large-scale mathematical reasoning benchmarks, Zipage achieves approximately 95% of the performance of full-KV inference engines while delivering an over 2.1× speedup, significantly improving request concurrency.
  • The proposed scheduling strategy and memory optimization techniques make Zipage practical and efficient for industrial-grade applications where high concurrency and memory constraints are critical.
  • This innovation directly targets improving LLM inference efficiency during reasoning, enabling more scalable and faster service deployments for generative LLMs.
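
To make the mechanism concrete, here is a minimal sketch of token-wise KV eviction layered on a paged KV cache, in the spirit of Compressed PagedAttention. Every name in it (PagedKVCache, evict_and_compact, PAGE_SIZE, the per-token importance scores) is an illustrative assumption, not the paper's actual implementation; what it demonstrates is that evicting low-importance tokens and compacting the survivors frees whole pages back to a shared pool.

```python
# Minimal sketch: token-wise KV eviction on top of a paged KV cache.
# PagedKVCache, evict_and_compact, and the importance scores are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass, field

PAGE_SIZE = 16  # tokens per physical KV page, a common PagedAttention setting


@dataclass
class PagedKVCache:
    free_pages: list            # pool of physical page ids shared by all requests
    page_table: dict = field(default_factory=dict)  # request id -> list of page ids
    tokens: dict = field(default_factory=dict)      # request id -> [(kv, score), ...]

    def append(self, req, kv, score):
        """Append one token's KV entry, grabbing a fresh page when needed."""
        seq = self.tokens.setdefault(req, [])
        if len(seq) % PAGE_SIZE == 0:  # current page is full (or first token)
            self.page_table.setdefault(req, []).append(self.free_pages.pop())
        seq.append((kv, score))

    def evict_and_compact(self, req, keep_ratio):
        """Token-wise eviction: drop low-importance tokens, then repack the
        survivors so that whole pages become free and return to the pool."""
        seq = self.tokens[req]
        keep = max(1, int(len(seq) * keep_ratio))
        # Keep the top-`keep` tokens by importance, preserving original order.
        ranked = sorted(range(len(seq)), key=lambda i: seq[i][1], reverse=True)
        self.tokens[req] = [seq[i] for i in sorted(ranked[:keep])]
        # Compaction: survivors need fewer pages; release the rest to the pool.
        needed = -(-keep // PAGE_SIZE)  # ceil(keep / PAGE_SIZE)
        pages = self.page_table[req]
        while len(pages) > needed:
            self.free_pages.append(pages.pop())


cache = PagedKVCache(free_pages=list(range(64)))
for t in range(40):                                # 40 tokens occupy 3 pages
    cache.append(req=0, kv=f"kv{t}", score=float(t % 7))
cache.evict_and_compact(req=0, keep_ratio=0.5)     # keep 20 tokens -> 2 pages
print(len(cache.page_table[0]), len(cache.free_pages))  # -> 2 62
```

The freed pages are what buy concurrency: the scheduler can hand them straight to queued requests. Presumably this is where Zipage's scheduling strategy and asynchronous compression fit in, running eviction off the critical path so that decoding does not stall.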


arXiv:2603.08743 (cs)
[Submitted on 1 Mar 2026]

Title: Zipage: Maintain High Request Concurrency for LLM Reasoning through Compressed PagedAttention

By Mengqi Liao and 8 other authors
Abstract: With reasoning becoming the generative paradigm for large language models (LLMs), the memory bottleneck caused by KV cache during the decoding phase has become a critical factor limiting high-concurrency service. Although existing KV cache eviction methods address the memory issue, most of them are impractical for industrial-grade applications. This paper introduces Compressed PagedAttention, a method that combines token-wise KV cache eviction with PagedAttention. We propose a comprehensive scheduling strategy and support prefix caching and asynchronous compression for Compressed PagedAttention. Based on this, we have developed a high-concurrency LLM inference engine, Zipage. On large-scale mathematical reasoning tasks, Zipage achieves around 95% of the performance of Full KV inference engines while delivering over 2.1× speedup.
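
The concurrency gain follows from simple KV cache arithmetic. The sketch below assumes a Llama-2-7B-like shape (32 layers, 32 KV heads, head dimension 128, fp16) and a 40 GiB KV budget; none of these figures come from the paper, but they show why retaining half the tokens roughly doubles how many requests fit in memory.

```python
# Why KV eviction raises concurrency: back-of-the-envelope arithmetic.
# The model shape and memory budget are assumptions, not from the paper.
n_layers, n_kv_heads, head_dim = 32, 32, 128   # Llama-2-7B-like shape
bytes_per_elem = 2                              # fp16
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
# 2 (K and V) * 32 * 32 * 128 * 2 B = 512 KiB of KV cache per token

seq_len = 8192                                  # a long reasoning trace
full_kv_per_req = seq_len * kv_bytes_per_token  # 4 GiB per request
kv_budget = 40 * 2**30                          # 40 GiB reserved for KV cache

for keep_ratio in (1.0, 0.5):
    fits = kv_budget // int(full_kv_per_req * keep_ratio)
    print(f"keep {keep_ratio:.0%} of tokens -> {fits} concurrent requests")
# keep 100% of tokens -> 10 concurrent requests
# keep 50% of tokens -> 20 concurrent requests
```

Doubling the number of resident requests in this way is consistent in spirit with the paper's reported over 2.1× speedup at around 95% of full-KV performance.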
Subjects: Distributed, Parallel, and Cluster Computing (cs.DC); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.08743 [cs.DC]
  (or arXiv:2603.08743v1 [cs.DC] for this version)
  https://doi.org/10.48550/arXiv.2603.08743

Submission history

From: Mengqi Liao
[v1] Sun, 1 Mar 2026 14:01:36 UTC (1,146 KB)