
The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

arXiv cs.AI / March 11, 2026

Developer Stack & Infrastructure · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that the large language model context window functions as a small, fast L1 cache rather than a full memory system: there is no L2, no virtual memory, and no paging, and the resulting structural waste is measurable at 21.8% of effective input tokens across 857 production sessions.
  • The author introduces Pichay, a demand paging system for LLM context windows that transparently evicts stale content and pins working-set pages identified by fault history, sharply reducing context consumption while keeping sessions operational.
  • In live production deployment, Pichay reduced context consumption by up to 93%, and offline replay across 1.4 million simulated evictions produced a fault rate of just 0.0254%, supporting demand paging and memory hierarchy concepts for LLM context management.
  • The study argues that many persistent LLM challenges (context size limits, attention degradation, loss of session state) are classic virtual memory problems in disguise and can be addressed with established memory hierarchy solutions.
  • The paper outlines a multi-level memory hierarchy for LLMs, reports the first three levels deployed in production, and identifies cross-session memory management as the next frontier; a sketch of the tiers follows this list.
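
The abstract names the deployed levels explicitly (L1 eviction, L2 fault-driven pinning, L3 model-initiated compaction, with persistent storage beyond). As a reading aid, here is a minimal sketch of how those tiers might be labeled in code; the enum and its member names are illustrative assumptions, not taken from the paper.

```python
# Illustrative labels for the memory-hierarchy levels named in the
# abstract; the descriptions follow the paper, the enum itself does not.
from enum import Enum

class ContextTier(Enum):
    L1 = "context window, eviction-managed"
    L2 = "fault-driven pinned working set"
    L3 = "model-initiated conversation compaction"
    PERSISTENT = "cross-session memory (identified as the open frontier)"
```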


arXiv:2603.09023 (cs)
[Submitted on 9 Mar 2026]

Title: The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

Authors: Tony Mason
Abstract:The context window of a large language model is not memory. It is L1 cache: a small, fast, expensive resource that the field treats as the entire memory system. There is no L2, no virtual memory, no paging. Every tool definition, every system prompt, and every stale tool result occupies context for the lifetime of the session. The result is measurable: across 857 production sessions and 4.45 million effective input tokens, 21.8% is structural waste.
We present Pichay, a demand paging system for LLM context windows. Implemented as a transparent proxy between client and inference API, Pichay interposes on the message stream to evict stale content, detect page faults when the model re-requests evicted material, and pin working-set pages identified by fault history. In offline replay across 1.4 million simulated evictions, the fault rate is 0.0254%. In live production deployment over 681 turns, the system reduces context consumption by up to 93% (5,038 KB to 339 KB); under extreme sustained pressure, the system remains operational but exhibits the expected thrashing pathology, with repeated fault-in of evicted content.
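
The abstract describes the mechanism only at a high level: evict stale content, fault it back in when the model re-requests it, and pin pages whose fault history marks them as working set. A minimal sketch of that eviction/fault/pin cycle follows; all names here (ContextPager, Page, stale_after, pin_threshold) are hypothetical, not Pichay's actual API.

```python
# Hypothetical sketch of fault-driven demand paging for an LLM context
# window, in the spirit of the design the abstract describes. Nothing
# here is taken from Pichay itself.
from dataclasses import dataclass

@dataclass
class Page:
    page_id: str
    content: str
    last_used_turn: int
    fault_count: int = 0
    pinned: bool = False

class ContextPager:
    """Interposes between client and inference API, paging context."""

    def __init__(self, stale_after: int = 8, pin_threshold: int = 2):
        self.stale_after = stale_after      # turns before a page is evictable
        self.pin_threshold = pin_threshold  # faults before a page is pinned
        self.resident: dict[str, Page] = {} # pages currently in the context
        self.evicted: dict[str, Page] = {}  # backing store for evicted pages

    def evict_stale(self, turn: int) -> None:
        """L1 eviction: move unpinned, stale pages out of the context."""
        for pid in list(self.resident):
            page = self.resident[pid]
            if not page.pinned and turn - page.last_used_turn >= self.stale_after:
                self.evicted[pid] = self.resident.pop(pid)

    def fault_in(self, pid: str, turn: int) -> Page | None:
        """Page fault: the model re-requested evicted material."""
        page = self.evicted.pop(pid, None)
        if page is None:
            return None  # not an evicted page; nothing to fault in
        page.fault_count += 1
        page.last_used_turn = turn
        # Fault-driven pinning: repeatedly faulted pages join the working set.
        if page.fault_count >= self.pin_threshold:
            page.pinned = True
        self.resident[pid] = page
        return page
```

Under sustained pressure a loop like this would exhibit exactly the thrashing pathology the abstract reports: pages are evicted and faulted back in repeatedly until pinning stabilizes the working set.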
The key observation is that the problems the field faces (context limits, attention degradation, cost scaling, lost state across sessions) are virtual memory problems wearing different clothes. The solutions exist: working set theory (Denning, 1968), demand paging, fault-driven replacement policies, and memory hierarchies with multiple eviction-managed levels. We describe the architecture of a full memory hierarchy for LLM systems (L1 through persistent storage), report on the first three levels deployed in production use (L1 eviction, L2 fault-driven pinning, L3 model-initiated conversation compaction), and identify cross-session memory as the remaining frontier.
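
Denning's working set, which the abstract invokes, is the set W(t, τ) of pages referenced during the last τ time units; only pages outside it are safe eviction candidates. Below is a minimal sketch under the assumption that "time" is measured in conversation turns (the paper may define it differently).

```python
# Minimal sketch of Denning's working-set model (1968) with conversation
# turns as the time unit: W(t, tau) = pages referenced in turns (t - tau, t].
def working_set(references: list[tuple[int, str]], t: int, tau: int) -> set[str]:
    """Return the ids of pages referenced within the last tau turns."""
    return {pid for turn, pid in references if t - tau < turn <= t}

# Example: at turn 10 with tau = 3, only pages touched in turns 8-10 remain;
# the tool definitions referenced at turn 6 become eviction candidates.
refs = [(6, "tool_defs"), (8, "file_a"), (9, "file_a"), (10, "system_note")]
assert working_set(refs, t=10, tau=3) == {"file_a", "system_note"}
```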
Subjects: Operating Systems (cs.OS); Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
Cite as: arXiv:2603.09023 [cs.OS]
  (or arXiv:2603.09023v1 [cs.OS] for this version)
  https://doi.org/10.48550/arXiv.2603.09023

Submission history

From: Tony Mason
[v1] Mon, 9 Mar 2026 23:38:32 UTC (51 KB)