Context Cartography: Toward Structured Governance of Contextual Space in Large Language Model Systems
arXiv cs.AI / 2026/3/24
Key Points
- The paper argues that simply increasing LLM context window size is not sufficient because transformer “contextual space” has structural gradients, salience asymmetries, and degradation effects over long distances (e.g., “lost in the middle”).
- It proposes “Context Cartography,” a formal governance framework that partitions informational context into three zones—black fog (unobserved), gray fog (stored memory), and visible field (active reasoning surface)—and defines seven operators to manage transitions across and within these zones.
- The seven cartographic operators (reconnaissance, selection, simplification, aggregation, projection, displacement, and layering) are organized by transformation type and zone scope, and are derived from a coverage analysis of the non-trivial transformations between zones.
- The framework is grounded in transformer attention salience geometry, framing the operators as compensations for issues like linear prefix memory, append-only state, and entropy accumulation as context grows.
- Through an analysis of four contemporary systems (Claude Code, Letta, MemOS, and OpenViking), the authors argue that these operators have converged independently across the industry, and they offer testable, benchmarkable ablation and diagnostic predictions.
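The three-zone taxonomy and its operators can be sketched as a small data model. Note that this is an illustrative sketch only: the zone names and operator names come from the paper as summarized above, but the specific source-to-destination assignment of each operator is a guess for illustration, not the paper's formal definition.

```python
from enum import Enum

class Zone(Enum):
    """The three context zones named in the paper."""
    BLACK_FOG = "black_fog"        # unobserved information
    GRAY_FOG = "gray_fog"          # stored but inactive memory
    VISIBLE_FIELD = "visible"      # active reasoning surface

# Hypothetical mapping of the seven operators to the zone transition
# each one primarily governs (source -> destination). Operators whose
# source equals their destination act within a single zone.
OPERATORS: dict[str, tuple[Zone, Zone]] = {
    "reconnaissance": (Zone.BLACK_FOG, Zone.GRAY_FOG),           # probe the unobserved
    "selection":      (Zone.GRAY_FOG, Zone.VISIBLE_FIELD),       # promote memory into context
    "simplification": (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD),  # compress in place
    "aggregation":    (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD),  # merge related items
    "projection":     (Zone.GRAY_FOG, Zone.VISIBLE_FIELD),       # derive a view of memory
    "displacement":   (Zone.VISIBLE_FIELD, Zone.GRAY_FOG),       # evict to storage
    "layering":       (Zone.VISIBLE_FIELD, Zone.VISIBLE_FIELD),  # restructure by priority
}

def operators_into(zone: Zone) -> list[str]:
    """Operators whose destination is the given zone."""
    return [name for name, (_, dst) in OPERATORS.items() if dst is zone]

print(operators_into(Zone.GRAY_FOG))
```

Modeling operators as typed zone transitions makes the paper's "zone scope" organization queryable: for example, `operators_into(Zone.VISIBLE_FIELD)` lists every operator that shapes the active reasoning surface.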

