AI Navigate

Learning to Forget: Sleep-Inspired Memory Consolidation for Resolving Proactive Interference in Large Language Models

arXiv cs.AI / 3/17/2026


Key Points

  • SleepGate proposes a sleep-like inference cycle to mitigate proactive interference by managing the KV cache in transformer-based LLMs.
  • It introduces three components: a conflict-aware temporal tagger, a lightweight forgetting gate that evicts or compresses stale cache entries, and a consolidation module that summarizes the surviving ones.
  • Inference-time sleep cycles are governed by an adaptive entropy-based trigger and trained with a dual-phase objective covering wake-phase language modeling and post-consolidation retrieval.
  • Theoretical analysis shows the interference horizon shrinks from O(n) to O(log n), so retrieval degrades more slowly as context length grows.
  • Empirical results on a small 4-layer transformer (793K parameters) show 99.5% retrieval accuracy at PI depth 5 and 97.0% at depth 10, outperforming baselines and suggesting an architecture-level solution beyond prompt engineering.

Abstract

Large language models (LLMs) suffer from proactive interference (PI): outdated information in the context window disrupts retrieval of current values. This interference degrades retrieval accuracy log-linearly as stale associations accumulate, a bottleneck that persists regardless of context length and resists prompt-engineering mitigations. Biological brains resolve an analogous challenge through sleep-dependent memory consolidation: synaptic downscaling, selective replay, and targeted forgetting. We propose SleepGate, a biologically inspired framework that augments transformer-based LLMs with a learned sleep cycle over the key-value (KV) cache. SleepGate introduces three mechanisms: (1) a conflict-aware temporal tagger that detects when new entries supersede old ones; (2) a lightweight forgetting gate trained to selectively evict or compress stale cache entries; and (3) a consolidation module that merges surviving entries into compact summaries. These components activate periodically during inference in sleep micro-cycles, governed by an adaptive entropy-based trigger. We formalize a dual-phase training objective jointly optimizing language modeling during the wake phase and post-consolidation retrieval during the sleep phase. Theoretical analysis shows SleepGate reduces the interference horizon from O(n) to O(log n). In experiments with a small-scale transformer (4 layers, 793K parameters), SleepGate achieves 99.5% retrieval accuracy at PI depth 5 and 97.0% at depth 10, while all five baselines -- full KV cache, sliding window, H2O, StreamingLLM, and decay-only ablation -- remain below 18%. Our framework offers an architecture-level solution that prompt engineering cannot address.
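The abstract does not specify how the adaptive entropy-based trigger is computed. One plausible reading, sketched below under assumptions, is to monitor the Shannon entropy of the model's next-token distribution and fire a sleep micro-cycle when it drifts above a running-average baseline; the class name, EMA decay, and margin are hypothetical hyperparameters, not values from the paper.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a next-token distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

class EntropyTrigger:
    """Illustrative adaptive trigger (assumed mechanism): fire a sleep
    micro-cycle when predictive entropy rises well above its running
    average, a proxy for interference-induced uncertainty."""

    def __init__(self, decay: float = 0.9, margin: float = 0.5):
        self.decay = decay    # EMA decay for the entropy baseline
        self.margin = margin  # how far above baseline triggers sleep
        self.avg = None       # exponential moving average of entropy

    def should_sleep(self, probs) -> bool:
        h = entropy(probs)
        if self.avg is None:       # first step just seeds the baseline
            self.avg = h
            return False
        fire = h > self.avg + self.margin
        # Update the baseline regardless of whether we fired.
        self.avg = self.decay * self.avg + (1 - self.decay) * h
        return fire
```

On a confident (low-entropy) step the trigger stays quiet; a sudden jump to a near-uniform distribution, as stale associations pile up, pushes entropy past the adaptive threshold and triggers consolidation.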