Learning to Forget: Continual Learning with Adaptive Weight Decay

arXiv cs.LG / 5/1/2026


Key Points

  • The paper addresses continual learning under finite capacity by proposing controlled forgetting to free up model capacity for new knowledge.
  • It argues that standard weight decay acts as uniform forgetting, which can be inefficient because different parameters may encode stable knowledge versus rapidly changing targets.
  • The authors introduce FADE (Forgetting through Adaptive Decay), which adapts weight-decay rates per parameter online using approximate meta-gradient descent (see the sketch after this list).
  • They derive FADE for the online linear setting and extend it to neural networks by applying the update to the final layer.
  • Experiments show FADE learns distinct decay rates automatically, works well alongside step-size adaptation, and improves performance over fixed weight decay on online tracking and streaming classification tasks.
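
The snippet below is a minimal sketch of what an IDBD-style meta-gradient on per-parameter decay rates could look like in the online linear (LMS) setting the paper derives FADE for. The log-parameterisation of the decay rates, the trace update, and all hyperparameter names are illustrative assumptions, not the paper's exact FADE derivation.

```python
import numpy as np

def adaptive_decay_lms(stream, n_features, alpha=0.05, meta_lr=0.01):
    """Online LMS with per-weight decay rates adapted by a meta-gradient (sketch).

    `stream` yields (x, y) pairs with x an array of shape (n_features,).
    """
    w   = np.zeros(n_features)        # learned weights
    rho = np.full(n_features, -4.0)   # log decay rates: lambda_i = exp(rho_i)
    h   = np.zeros(n_features)        # trace approximating d w_i / d rho_i

    for x, y in stream:
        delta = y - w @ x                            # prediction error
        # Meta-gradient step: delta * x_i * h_i approximates -dJ/drho_i,
        # so decay grows where forgetting has been reducing the error.
        rho  += meta_lr * delta * x * h
        rho   = np.clip(rho, -10.0, -0.01)           # keep lambda_i in (0, 1)
        lam   = np.exp(rho)
        # Base update: gradient step plus per-parameter decay (controlled forgetting).
        w     = (1.0 - lam) * w + alpha * delta * x
        # Trace update (IDBD-style approximation, ignoring cross-parameter terms).
        h     = (1.0 - lam) * np.maximum(0.0, 1.0 - alpha * x * x) * h - lam * w

    return w, np.exp(rho)
```

On a toy stream where some input dimensions map to a fixed target and others to a drifting one, the decay rates for the stable weights would be expected to shrink while those for the tracking weights stay larger, mirroring the paper's finding that distinct per-parameter decay rates emerge automatically.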

Abstract

Continual learning agents with finite capacity must balance acquiring new knowledge with retaining the old. This requires controlled forgetting of knowledge that is no longer needed, freeing up capacity to learn. Weight decay, viewed as a mechanism for forgetting, can serve this role by gradually discarding information stored in the weights. However, a fixed scalar weight decay drives this forgetting uniformly over time and uniformly across all parameters, even when some encode stable knowledge while others track rapidly changing targets. We introduce Forgetting through Adaptive Decay (FADE), which adapts per-parameter weight decay rates online via approximate meta-gradient descent. We derive FADE for the online linear setting and apply it to the final layer of neural networks. Our empirical analysis shows that FADE automatically discovers distinct decay rates for different parameters, complements step-size adaptation, and consistently improves over fixed weight decay across online tracking and streaming classification problems.
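
As a rough illustration of the final-layer application the abstract describes, the sketch below runs the same kind of per-parameter decay adaptation only on an output layer's weight matrix, with features supplied by the rest of the network. All names (`AdaptiveDecayHead`, `phi`, the hyperparameters) are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

class AdaptiveDecayHead:
    """Final linear layer with per-weight decay rates adapted online (sketch)."""

    def __init__(self, n_features, n_outputs, alpha=0.05, meta_lr=0.01):
        self.W   = np.zeros((n_outputs, n_features))       # output-layer weights
        self.rho = np.full((n_outputs, n_features), -4.0)  # log decay rates
        self.h   = np.zeros((n_outputs, n_features))       # meta-gradient traces
        self.alpha, self.meta_lr = alpha, meta_lr

    def predict(self, phi):
        return self.W @ phi

    def step(self, phi, target):
        """One online update from penultimate features `phi` and a target vector."""
        delta = target - self.W @ phi                      # per-output error
        grad  = np.outer(delta, phi)                       # -dJ/dW for squared loss
        self.rho += self.meta_lr * grad * self.h           # meta step on decay rates
        self.rho  = np.clip(self.rho, -10.0, -0.01)
        lam = np.exp(self.rho)
        self.W = (1.0 - lam) * self.W + self.alpha * grad  # decayed gradient step
        self.h = (1.0 - lam) * self.h - lam * self.W       # trace of d W / d rho
        return delta
```

In practice `phi` would come from the network's earlier layers, so only the head forgets adaptively while the rest of the model is trained however the streaming task dictates.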