AI Navigate

The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

arXiv cs.AI / March 11, 2026

Developer Stack & Infrastructure · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that a large language model's (LLM) context window functions not as a complete memory system but as a small, fast L1 cache. With no L2, no virtual memory, and no paging, every tool definition and stale result occupies context for the whole session, producing major inefficiency: 21.8% of tokens across the measured sessions are structural waste.
  • The authors introduce Pichay, a demand paging system for LLM context windows. Implemented as a transparent proxy between the client and the inference API, it manages eviction and fault-driven page pinning, dramatically reducing context consumption while preserving operational stability.
  • In full production deployment, Pichay reduced context memory use by up to 93% while sustaining an extremely low fault rate, demonstrating that demand paging and memory-hierarchy concepts are effective for LLM context management.
  • The work suggests that many persistent LLM challenges, including context size limits, attention degradation, and loss of state across sessions, mirror classical virtual memory problems and can therefore be addressed with established memory-hierarchy solutions.
  • The paper outlines a multi-level memory hierarchy for LLMs, reports the first three levels deployed in production, and highlights cross-session memory management as the next major research and development frontier.
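The evict/fault/pin loop summarized above can be sketched in a few lines. Everything here is an illustrative assumption, not a detail from the paper: the class name `ContextPager`, the `PIN_THRESHOLD` policy, and the FIFO victim choice are invented stand-ins for whatever Pichay actually does.

```python
# Minimal sketch of demand paging for an LLM context window.
# All names and policies below are illustrative, not from the paper.

from collections import Counter

PIN_THRESHOLD = 2  # assumed policy: pin a page after this many faults

class ContextPager:
    """Keeps the live context small by evicting stale pages to a backing
    store ("L2"), faulting them back in on demand, and pinning pages
    whose fault history marks them as part of the working set."""

    def __init__(self, max_live_pages):
        self.max_live_pages = max_live_pages
        self.live = {}        # page_id -> content currently in the context
        self.backing = {}     # page_id -> evicted content
        self.pinned = set()   # pages exempt from eviction
        self.faults = Counter()

    def add(self, page_id, content):
        self.live[page_id] = content
        self._evict_if_needed()

    def _evict_if_needed(self):
        # Evict the oldest unpinned page (FIFO stand-in for a real policy).
        while len(self.live) > self.max_live_pages:
            victim = next((p for p in self.live if p not in self.pinned), None)
            if victim is None:
                break  # everything is pinned; nothing can be evicted
            self.backing[victim] = self.live.pop(victim)

    def access(self, page_id):
        if page_id in self.live:
            return self.live[page_id]
        # Page fault: the model re-requested evicted material.
        self.faults[page_id] += 1
        content = self.backing.pop(page_id)
        if self.faults[page_id] >= PIN_THRESHOLD:
            self.pinned.add(page_id)  # hot page: keep it resident
        self.add(page_id, content)
        return content
```

A transparent proxy in this style would interpose on the message stream, treating tool definitions and tool results as pages and rewriting the context it forwards to the inference API.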

Computer Science > Operating Systems

arXiv:2603.09023 (cs)
[Submitted on 9 Mar 2026]

Title: The Missing Memory Hierarchy: Demand Paging for LLM Context Windows

Authors:Tony Mason
Abstract: The context window of a large language model is not memory. It is L1 cache: a small, fast, expensive resource that the field treats as the entire memory system. There is no L2, no virtual memory, no paging. Every tool definition, every system prompt, and every stale tool result occupies context for the lifetime of the session. The result is measurable: across 857 production sessions and 4.45 million effective input tokens, 21.8% is structural waste.
We present Pichay, a demand paging system for LLM context windows. Implemented as a transparent proxy between client and inference API, Pichay interposes on the message stream to evict stale content, detect page faults when the model re-requests evicted material, and pin working-set pages identified by fault history. In offline replay across 1.4 million simulated evictions, the fault rate is 0.0254%. In live production deployment over 681 turns, the system reduces context consumption by up to 93% (5,038 KB to 339 KB); under extreme sustained pressure, the system remains operational but exhibits the expected thrashing pathology, with repeated fault-in of evicted content.
The key observation is that the problems the field faces (context limits, attention degradation, cost scaling, and lost state across sessions) are virtual memory problems wearing different clothes. The solutions exist: working set theory (Denning, 1968), demand paging, fault-driven replacement policies, and memory hierarchies with multiple eviction-managed levels. We describe the architecture of a full memory hierarchy for LLM systems (L1 through persistent storage), report on the first three levels deployed in production use (L1 eviction, L2 fault-driven pinning, L3 model-initiated conversation compaction), and identify cross-session memory as the remaining frontier.
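The abstract's appeal to working set theory can be made concrete with a toy sketch. The trace, the window size `tau`, and the page names below are invented for illustration; the closing arithmetic simply checks the reduction figure the abstract reports.

```python
# Denning's working-set model on a hypothetical context-page reference
# trace. W(t, tau) is the set of pages referenced in the last tau
# references before time t.

def working_set(trace, t, tau):
    """Return W(t, tau): pages referenced in the window (t - tau, t]."""
    start = max(0, t - tau)
    return set(trace[start:t])

trace = ["sys", "tool_a", "sys", "result_1", "tool_a", "sys"]
# At t=6 with tau=4, only pages touched in the last four references
# belong to the working set: {"sys", "result_1", "tool_a"}.
resident = working_set(trace, t=6, tau=4)

# Sanity check on the paper's reported numbers: 5,038 KB -> 339 KB
reduction = 1 - 339 / 5038
print(f"{reduction:.1%}")  # 93.3%, matching the "up to 93%" claim
```

Pages that fall out of the window are eviction candidates; pages that keep reappearing (here, "sys") are exactly the ones a fault-driven policy would end up pinning.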
Subjects: Operating Systems (cs.OS); Artificial Intelligence (cs.AI); Software Engineering (cs.SE)
Cite as: arXiv:2603.09023 [cs.OS]
  (or arXiv:2603.09023v1 [cs.OS] for this version)
  https://doi.org/10.48550/arXiv.2603.09023

Submission history

From: Tony Mason
[v1] Mon, 9 Mar 2026 23:38:32 UTC (51 KB)