The Dynamic Gist-Based Memory Model (DGMM): A Memory-Centric Architecture for Artificial Intelligence

arXiv cs.AI / 5/5/2026

Key Points

  • The paper argues that today’s AI—especially large language models—still struggles with persistent memory, temporal grounding, provenance, and interpretability because experience is often stored implicitly in fixed parameters.
  • It proposes a memory-centric alternative where memory is treated as a first-class, structured substrate for reasoning rather than relying primarily on parameter-centric learning.
  • The Dynamic Gist-Based Memory Model (DGMM) represents experience as an evolving, graph-structured episodic-semantic memory grounded in time, source, and interaction context.
  • DGMM uses cue-conditioned recall to build working memory and defines a formal schema and architectural invariants grounded in additive memory growth and recall-conditioned interpretation.
  • The reported properties include episodic persistence, cue-localized surprise, and contextual variability without modifying the stored memory structure, aiming to enable interpretable, context-aware, temporally grounded AI without retraining.
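To make the key points concrete, here is a minimal sketch of what an append-only, graph-structured episodic-semantic memory grounded in time, source, and interaction context could look like. The names (`GistNode`, `EpisodicMemory`) and representation are illustrative assumptions, not taken from the paper:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical sketch of a DGMM-style memory substrate: experience is stored
# as explicit conceptual "gists" with provenance, and growth is strictly
# additive (no mutation or deletion of stored structure).

@dataclass(frozen=True)
class GistNode:
    """A conceptual gist grounded in time, source, and interaction context."""
    concept: str
    timestamp: float
    source: str
    context: str

class EpisodicMemory:
    """Append-only episodic-semantic graph: nodes plus labeled edges."""
    def __init__(self) -> None:
        self.nodes: List[GistNode] = []
        self.edges: List[Tuple[int, int, str]] = []  # (from_idx, to_idx, relation)

    def add_gist(self, node: GistNode) -> int:
        self.nodes.append(node)  # additive growth only; existing entries are immutable
        return len(self.nodes) - 1

    def link(self, a: int, b: int, relation: str) -> None:
        self.edges.append((a, b, relation))

mem = EpisodicMemory()
i = mem.add_gist(GistNode("coffee order", 1.0, "user", "morning chat"))
j = mem.add_gist(GistNode("project deadline", 2.0, "user", "morning chat"))
mem.link(i, j, "co-occurred")
```

Because entries are frozen and the store only appends, past interactions remain inspectable with their provenance intact, which is the interpretability property the summary emphasizes.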

Abstract

Contemporary artificial intelligence systems achieve strong performance through large-scale parameterization, retrieval augmentation, and training on extensive static corpora. Despite these advances, they continue to face limitations in persistent memory, temporal grounding, provenance, and interpretability. These challenges are especially pronounced in large language models, where experience is encoded implicitly in fixed parameters, limiting the ability to preserve, inspect, and reinterpret past interactions over time. This paper establishes a memory-centric architectural foundation for artificial intelligence in which experience is represented explicitly and persistently to support temporal grounding, provenance, and interpretability. It proposes an alternative to parameter-centric approaches by treating memory as a first-class, structured substrate for reasoning. We introduce the Dynamic Gist-Based Memory Model (DGMM), an architecture in which experience is represented as an evolving, graph-structured episodic-semantic memory. DGMM encodes experience as interconnected conceptual structures grounded in time, source, and interaction context, and defines selective, cue-conditioned recall as the mechanism for constructing working memory. A formal schema and architectural invariants are provided based on additive memory growth and recall-conditioned interpretation. The results specify properties of DGMM, including episodic persistence, locality of cue-conditioned surprise, and contextual variability without structural modification of stored memory. DGMM provides a coherent architectural theory in which memory is explicit and persistent, supporting evolving interpretation without retraining and enabling interpretable, context-aware, and temporally grounded AI systems.
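The abstract's central mechanism, selective cue-conditioned recall that constructs a working memory without structurally modifying the store, can be sketched as follows. The token-overlap score and record layout here are stand-in assumptions; the paper's actual recall function is not specified in this summary:

```python
# Hypothetical sketch of cue-conditioned recall over an append-only memory,
# represented here as (concept, timestamp, source, context) records.
# A crude token-overlap score stands in for whatever similarity DGMM defines.

def recall(memory, cue, k=3):
    """Build a working-memory view: the top-k records sharing tokens with
    the cue. Read-only, so stored memory is never structurally modified."""
    cue_tokens = set(cue.lower().split())

    def score(record):
        concept, _, _, context = record
        tokens = set(concept.lower().split()) | set(context.lower().split())
        return len(cue_tokens & tokens)

    ranked = sorted(memory, key=score, reverse=True)
    return [r for r in ranked[:k] if score(r) > 0]

memory = [
    ("coffee order", 1.0, "user", "morning chat"),
    ("project deadline", 2.0, "user", "work planning"),
    ("train schedule", 3.0, "web", "travel planning"),
]

# Different cues condition different working-memory views over the same
# unchanged store -- the "contextual variability without structural
# modification" property the abstract describes.
view = recall(memory, "planning the project")
```

Note that evolving interpretation comes from changing the cue, not from retraining or rewriting memory: the same store yields different views under different cues.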