Task Switching Without Forgetting via Proximal Decoupling

arXiv cs.LG / April 22, 2026


Key Points

  • The paper tackles continual learning’s core problem—learning new tasks without forgetting old ones—by arguing that typical regularization-based methods overly couple learning and retention in a single gradient update.
  • It proposes “proximal decoupling,” using operator splitting to separate the optimization into a task-loss learning step and a proximal stability step with a sparse regularizer that prunes unnecessary parameters.
  • By treating stability-plasticity as a negotiated update between two complementary operators (rather than a conflicting gradient), the method aims to avoid over-constraining the model as task sequences get longer.
  • The authors provide theoretical justification for the splitting approach and report state-of-the-art performance on standard continual-learning benchmarks, improving both stability and adaptability.
  • The approach reportedly avoids additional components such as replay buffers, Bayesian sampling, or meta-learning, while still delivering stronger results.
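The "negotiated update between two complementary operators" described above is the classical forward-backward splitting pattern. Assuming the sparse regularizer is an \(\ell_1\)-type penalty \(\Omega\) (the summary does not specify the paper's exact choice), one iteration can be sketched as:

```latex
% Learning step: gradient descent on the current task loss \mathcal{L}_k
\theta^{t+\frac{1}{2}} = \theta^{t} - \eta\,\nabla \mathcal{L}_k\!\left(\theta^{t}\right)

% Stability step: proximal map of the sparse regularizer \Omega
\theta^{t+1} = \operatorname{prox}_{\eta\lambda\Omega}\!\left(\theta^{t+\frac{1}{2}}\right)
  = \arg\min_{\theta}\;\tfrac{1}{2}\bigl\|\theta - \theta^{t+\frac{1}{2}}\bigr\|_2^2 + \eta\lambda\,\Omega(\theta)
```

The learning step sees only the task loss; the proximal step alone enforces stability and sparsity, which is what lets the two signals be balanced without competing inside a single gradient.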

Abstract

In continual learning, the primary challenge is to learn new information without forgetting old knowledge. A common solution addresses this trade-off through regularization, penalizing changes to parameters critical for previous tasks. In most cases, this regularization term is directly added to the training loss and optimized with standard gradient descent, which blends learning and retention signals into a single update and does not explicitly separate essential parameters from redundant ones. As task sequences grow, this coupling can over-constrain the model, limiting forward transfer and leading to inefficient use of capacity. We propose a different approach that separates task learning from stability enforcement via operator splitting. The learning step focuses on minimizing the current task loss, while a proximal stability step applies a sparse regularizer to prune unnecessary parameters and preserve task-relevant ones. This turns the stability-plasticity trade-off into a negotiated update between two complementary operators, rather than a conflicting gradient. We provide theoretical justification for the splitting method on the continual-learning objective, and demonstrate that our proposed solver achieves state-of-the-art results on standard benchmarks, improving both stability and adaptability without the need for replay buffers, Bayesian sampling, or meta-learning components.
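To make the two-operator structure concrete, here is a minimal sketch of a decoupled update on a toy quadratic task loss. It assumes a plain \(\ell_1\) regularizer, whose proximal map is soft-thresholding; the paper's actual regularizer and importance weighting may differ, and `soft_threshold`, `split_update`, and the learning rate/penalty values below are illustrative choices, not the authors' implementation.

```python
import numpy as np

def soft_threshold(theta, tau):
    """Proximal operator of tau * ||theta||_1: shrinks every parameter
    toward zero and prunes those whose magnitude falls below tau."""
    return np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)

def split_update(theta, grad_task_loss, lr=0.1, lam=0.01):
    """One decoupled iteration: a gradient step on the current task loss,
    then a separate proximal stability step with a sparse regularizer."""
    # Learning step: plain gradient descent, no retention term in the loss.
    theta_half = theta - lr * grad_task_loss(theta)
    # Stability step: proximal map of the (assumed L1) sparse regularizer.
    return soft_threshold(theta_half, lr * lam)

# Toy task loss L(theta) = 0.5 * ||theta - target||^2; the middle
# component of the target is too small to survive the sparsity penalty.
target = np.array([1.0, 0.005, -2.0])
grad = lambda th: th - target

theta = np.zeros(3)
for _ in range(200):
    theta = split_update(theta, grad)
# The tiny component is pruned to exactly 0; the others converge to the
# soft-thresholded target, freeing capacity without a blended gradient.
```

Because the proximal step sets small weights to exactly zero, pruning happens as part of the update itself, rather than through a penalty gradient that only pushes weights toward zero.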