Experience Compression Spectrum: Unifying Memory, Skills, and Rules in LLM Agents

arXiv cs.AI / April 20, 2026


Key Points

  • The paper argues that as LLM agents run long-horizon, multi-session tasks, efficiently managing accumulated experience is a major bottleneck for both memory and skill discovery systems.
  • It finds low cross-community citation between memory-focused and skills-focused agent research (<1% across 1,136 references), motivating a unified view.
  • The authors propose the “Experience Compression Spectrum,” positioning agent memory, skills, and rules along one increasing-compression axis to reduce context use, retrieval latency, and compute overhead.
  • By mapping 20+ systems onto this spectrum, the paper finds that each operates at a fixed, predetermined compression level and that none supports adaptive cross-level compression, a gap dubbed the “missing diagonal.”
  • It also highlights that specialization alone doesn’t enable solution sharing across communities, that evaluation is tightly coupled to compression level, and that knowledge lifecycle management is largely overlooked.

Abstract

As LLM agents scale to long-horizon, multi-session deployments, efficiently managing accumulated experience becomes a critical bottleneck. Agent memory systems and agent skill discovery both address this challenge by extracting reusable knowledge from interaction traces, yet a citation analysis of 1,136 references across 22 primary papers reveals a cross-community citation rate below 1%. We propose the *Experience Compression Spectrum*, a unifying framework that positions memory, skills, and rules as points along a single axis of increasing compression (5–20× for episodic memory, 50–500× for procedural skills, 1,000×+ for declarative rules), directly reducing context consumption, retrieval latency, and compute overhead. Mapping 20+ systems onto this spectrum reveals that every system operates at a fixed, predetermined compression level; none supports adaptive cross-level compression, a gap we term the *missing diagonal*. We further show that specialization alone is insufficient (both communities independently solve shared sub-problems without exchanging solutions), that evaluation methods are tightly coupled to compression levels, that transferability increases with compression at the cost of specificity, and that knowledge lifecycle management remains largely neglected. We articulate open problems and design principles for scalable, full-spectrum agent learning systems.
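To make the spectrum concrete, here is a minimal sketch (not from the paper) that classifies a stored experience artifact by its compression ratio, using only the ranges quoted in the abstract: 5–20× for episodic memory, 50–500× for procedural skills, 1,000×+ for declarative rules. The function names, token-count framing, and the exact boundaries between the quoted ranges are our own illustrative assumptions.

```python
def compression_ratio(raw_trace_tokens: int, artifact_tokens: int) -> float:
    """Ratio of the raw interaction trace's size to the stored artifact's size."""
    return raw_trace_tokens / artifact_tokens

def spectrum_level(ratio: float) -> str:
    """Map a compression ratio onto the Experience Compression Spectrum.

    Ranges follow the abstract (5-20x episodic memory, 50-500x procedural
    skills, 1,000x+ declarative rules); the cutoffs between those quoted
    ranges are assumptions made for illustration only.
    """
    if ratio >= 1_000:
        return "declarative rule"
    if ratio >= 50:
        return "procedural skill"
    if ratio >= 5:
        return "episodic memory"
    return "raw trace"

# Example: a 40,000-token interaction trace distilled to three sizes.
print(spectrum_level(compression_ratio(40_000, 4_000)))  # 10x   -> episodic memory
print(spectrum_level(compression_ratio(40_000, 200)))    # 200x  -> procedural skill
print(spectrum_level(compression_ratio(40_000, 25)))     # 1600x -> declarative rule
```

In this framing, the paper's "missing diagonal" is a system that could move one artifact between these levels adaptively, rather than committing to a single fixed ratio at design time.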