Improving Sparse Memory Finetuning

arXiv cs.LG / 4/8/2026


Key Points

  • The paper addresses continual adaptation for LLMs without catastrophic forgetting by localizing learning updates to a small subset of parameters rather than modifying shared dense weights.
  • It proposes an open-source Sparse Memory Finetuning (SMF) pipeline that retrofits a pretrained model (Qwen-2.5-0.5B) with explicit sparse memory modules to support continual learning.
  • The authors introduce a theoretically motivated slot-selection mechanism using KL divergence to target memory updates for “surprising” tokens versus a background distribution.
  • Experiments show the retrofitted models can learn new factual knowledge while maintaining held-out capabilities with minimal forgetting, supporting the sparse-update approach as practical and effective.
  • The method is positioned as feasible on consumer hardware, lowering barriers to deploying continual learning in real-world settings.
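The core idea in the first key point, localizing updates to a small subset of parameters, can be illustrated with a minimal sketch. The variable names (`memory`, `selected`, `grad`) and the plain gradient step are illustrative assumptions, not the paper's actual implementation; the point is only that a sparse update touches a few memory slots and leaves the rest bit-identical, which is why it avoids cross-task interference.

```python
import numpy as np

# Illustrative sketch (not the paper's code): a memory table where only
# the slots selected for the current batch receive a gradient step.
rng = np.random.default_rng(0)
memory = rng.normal(size=(8, 4))   # 8 memory slots, each a 4-dim value
before = memory.copy()

selected = [2, 5]                  # slots chosen for this update (assumed)
grad = rng.normal(size=(len(selected), 4))
lr = 0.1

# Sparse update: only 2 of 8 rows change.
memory[selected] -= lr * grad

# All unselected slots are untouched, so knowledge stored there is intact.
untouched = [i for i in range(8) if i not in selected]
assert np.allclose(memory[untouched], before[untouched])
print("updated slots:", selected)
```

In a dense update (full finetuning or LoRA), every row of the analogous weight matrix would move, which is the source of the interference the paper is trying to avoid.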

Abstract

Large Language Models (LLMs) are typically static after training, yet real-world applications require continual adaptation to new knowledge without degrading existing capabilities. Standard approaches to updating models, such as full finetuning or parameter-efficient methods (e.g., LoRA), suffer from a fundamental limitation: catastrophic forgetting. Because they modify shared dense representations, updates for new knowledge interfere with previously learned tasks. Sparse Memory Finetuning (SMF) offers a promising alternative by localizing updates to a small subset of parameters in explicit memory layers. In this work, we present an open-source pipeline to retrofit existing pretrained models (Qwen-2.5-0.5B) with sparse memory modules, enabling effective continual learning on consumer hardware. We extend prior work by introducing a theoretically grounded slot-selection mechanism based on Kullback-Leibler (KL) divergence, which prioritizes memory updates for informationally "surprising" tokens relative to a background distribution. Our experiments demonstrate that the retrofitted models acquire new factual knowledge with minimal forgetting on held-out capabilities, validating the sparse-update hypothesis in a practical setting.
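The KL-based slot selection described above can be sketched as follows. This is a hedged reconstruction from the abstract alone: it assumes each token contributes a predictive distribution `p` that is compared against a background distribution `q` (taken here as uniform for simplicity), and that the tokens with the largest KL(p || q), i.e., the most "surprising" ones, are routed to memory updates. The function names and `top_k` parameter are illustrative.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions along the last axis."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q), axis=-1)

rng = np.random.default_rng(0)
vocab = 16

# Per-token predictive distributions for 3 tokens (softmax over logits).
logits = rng.normal(size=(3, vocab))
p = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

# Background distribution: uniform here, purely as an assumption.
q = np.full(vocab, 1.0 / vocab)

surprise = kl_divergence(p, q)               # one surprise score per token
top_k = 1
chosen = np.argsort(surprise)[::-1][:top_k]  # most surprising token(s)
print("surprise scores:", np.round(surprise, 3))
print("tokens selected for memory update:", chosen)
```

With a uniform background, KL(p || q) reduces to log(vocab) minus the entropy of p, so low-entropy (confidently peaked, hence informative) token distributions score as most surprising and win the memory-update budget.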