SuperLocalMemory V3: Information-Geometric Foundations for Zero-LLM Enterprise Agent Memory

arXiv cs.AI / 3/17/2026

Key Points

  • The paper establishes information-geometric foundations for AI agent memory, introducing a Fisher-information-based retrieval metric that is computable in O(d) time and invariant under sufficient statistics.
  • It models memory lifecycle with Riemannian Langevin dynamics, proving existence and uniqueness of the stationary distribution via the Fokker-Planck equation, replacing heuristic decay with principled convergence guarantees.
  • It proposes a cellular sheaf model in which non-trivial first cohomology classes correspond to irreconcilable contradictions across memory contexts.
  • On the LoCoMo benchmark, the approach gains +12.7 percentage points over engineering baselines across six conversations, and up to +19.9 percentage points on the hardest dialogues.
  • A four-channel retrieval architecture reaches 75% accuracy without cloud dependency and 87.7% with cloud augmentation, and a zero-LLM configuration satisfies EU AI Act data sovereignty requirements by architectural design.
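The paper does not spell out its O(d) formula here, but the claim is plausible because diagonal Gaussians form a product manifold under the Fisher metric, so the Fisher-Rao distance decomposes into a per-dimension closed form. A minimal sketch of that standard closed form (the function name and signature are illustrative, not the paper's API):

```python
import math

def fisher_rao_diag_gauss(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between two diagonal Gaussians, O(d) time.

    Each coordinate carries the Fisher metric ds^2 = (dmu^2 + 2 dsigma^2) / sigma^2,
    a rescaled hyperbolic plane; the product manifold sums squared
    per-dimension distances.
    """
    total = 0.0
    for m1, s1, m2, s2 in zip(mu1, sigma1, mu2, sigma2):
        # Per-dimension closed form: sqrt(2) * arccosh(1 + delta),
        # accumulated as its square for the product-manifold distance.
        delta = ((m1 - m2) ** 2 + 2.0 * (s1 - s2) ** 2) / (4.0 * s1 * s2)
        total += 2.0 * math.acosh(1.0 + delta) ** 2
    return math.sqrt(total)
```

Because `acosh(1) = 0`, identical distributions have distance zero, and the formula is symmetric in the two arguments, consistent with the Riemannian metric axioms the paper proves.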

Abstract

Persistent memory is a central capability for AI agents, yet the mathematical foundations of memory retrieval, lifecycle management, and consistency remain unexplored. Current systems employ cosine similarity for retrieval, heuristic decay for salience, and provide no formal contradiction detection. We establish information-geometric foundations through three contributions. First, a retrieval metric derived from the Fisher information structure of diagonal Gaussian families, satisfying Riemannian metric axioms, invariant under sufficient statistics, and computable in O(d) time. Second, memory lifecycle formulated as Riemannian Langevin dynamics with proven existence and uniqueness of the stationary distribution via the Fokker-Planck equation, replacing hand-tuned decay with principled convergence guarantees. Third, a cellular sheaf model where non-trivial first cohomology classes correspond precisely to irreconcilable contradictions across memory contexts. On the LoCoMo benchmark, the mathematical layers yield +12.7 percentage points over engineering baselines across six conversations, reaching +19.9 pp on the most challenging dialogues. A four-channel retrieval architecture achieves 75% accuracy without cloud dependency. Cloud-augmented results reach 87.7%. A zero-LLM configuration satisfies EU AI Act data sovereignty requirements by architectural design. To our knowledge, this is the first work establishing information-geometric, sheaf-theoretic, and stochastic-dynamical foundations for AI agent memory systems.
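The lifecycle claim is that Langevin dynamics converge to a unique stationary distribution, so decay need not be hand-tuned. A toy sketch of the underlying mechanism, simplified to an identity (Euclidean) metric so the Riemannian correction drifts vanish; the potential, step size, and function names are illustrative and not the paper's actual model:

```python
import numpy as np

def langevin_sample(grad_U, theta0, step=0.05, n_steps=50_000, burn_in=5_000, seed=0):
    """Euler-Maruyama discretization of overdamped Langevin dynamics,
        d theta = -grad U(theta) dt + sqrt(2) dW,
    whose stationary distribution is proportional to exp(-U(theta)).
    A genuine Riemannian version would precondition the drift and noise
    by the inverse metric and add its divergence term; omitted here.
    """
    rng = np.random.default_rng(seed)
    theta = float(theta0)
    samples = []
    for i in range(n_steps):
        theta += -step * grad_U(theta) + np.sqrt(2.0 * step) * rng.standard_normal()
        if i >= burn_in:
            samples.append(theta)
    return np.array(samples)
```

With the quadratic potential U(theta) = theta^2 / 2 (so grad_U is the identity), the chain settles near a standard normal regardless of its starting point, which is the Fokker-Planck convergence the abstract invokes, in miniature.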