Continual Learning as Shared-Manifold Continuation Under Compatible Shift

arXiv cs.LG / March 23, 2026


Key Points

  • The paper casts continual learning as continuation of a shared latent manifold, introducing Support-Preserving Manifold Assimilation (SPMA), a geometry-aware approach that preserves old representations while the model is updated.
  • It presents SPMA-OG, a geometry-preserving variant that combines sparse replay, output distillation, relational geometry preservation, local smoothing, and chart-assignment regularization on old anchors (a code sketch of the geometry term follows this list).
  • Experiments on compatible-shift CIFAR10 and Tiny-ImageNet show SPMA-OG improves old-task retention and representation preservation while remaining competitive on new-task accuracy.
  • A controlled atlas-manifold benchmark demonstrates near-perfect anchor-geometry preservation and improved new-task accuracy over replay, supporting the usefulness of geometry-aware anchor regularization for shared latent representations.
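
Since the paper's exact loss is not reproduced in this summary, here is a minimal sketch of the relational-geometry idea, assuming it penalizes drift in the pairwise distances among a fixed set of old anchor embeddings. The function name and the squared-distance penalty are illustrative assumptions, not SPMA-OG's published formulation.

    import torch

    def relational_geometry_loss(z_new: torch.Tensor,
                                 z_old: torch.Tensor) -> torch.Tensor:
        """Penalize changes in pairwise distances among old anchors.

        z_new: anchor embeddings under the current model, shape (A, d).
        z_old: the same anchors' embeddings, frozen before the update.
        """
        d_new = torch.cdist(z_new, z_new)  # (A, A) distances now
        d_old = torch.cdist(z_old, z_old)  # (A, A) distances before training
        return ((d_new - d_old) ** 2).mean()

    # Example: 32 anchors in a 16-dimensional latent space.
    z_before = torch.randn(32, 16)
    z_after = z_before + 0.05 * torch.randn_like(z_before)
    print(relational_geometry_loss(z_after, z_before))  # small but nonzero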

Abstract

Continual learning methods usually preserve old behavior by regularizing parameters, matching old outputs, or replaying previous examples. These strategies can reduce forgetting, but they do not directly specify how the latent representation should evolve. We study a narrower geometric alternative for the regime where old and new data should remain on the same latent support: continual learning as continuation of a shared manifold. We instantiate this view within Support-Preserving Manifold Assimilation (SPMA) and evaluate a geometry-preserving variant, SPMA-OG, that combines sparse replay, output distillation, relational geometry preservation, local smoothing, and chart-assignment regularization on old anchors. On representative compatible-shift CIFAR10 and Tiny-ImageNet runs, SPMA-OG improves over sparse replay baselines in old-task retention and representation-preservation metrics while remaining competitive on new-task accuracy. On a controlled synthetic atlas-manifold benchmark, it achieves near-perfect anchor-geometry preservation while also improving new-task accuracy over replay. These results provide evidence that geometry-aware anchor regularization is a useful inductive bias when continual learning should preserve a shared latent support rather than create a new one.
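
The abstract names SPMA-OG's five ingredients but not their formulas. The following is a hedged sketch of how such a composite objective could be assembled in PyTorch; the methods model.classify, model.embed, and model.chart_logits, the perturbation scale, and the weighting scheme are all assumptions for illustration, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def spma_og_loss(model, teacher, new_batch, anchor_batch,
                     z_anchor_old, chart_old, w):
        # new_batch: labeled data from the new task.
        # anchor_batch: a sparse replay sample of old "anchor" examples.
        # z_anchor_old / chart_old: the anchors' embeddings and chart
        #   assignments, recorded before the new task begins.
        x_new, y_new = new_batch
        x_a, y_a = anchor_batch

        # (1) New-task loss plus sparse replay on old anchors.
        logits_new = model.classify(x_new)   # model.classify is assumed
        logits_a = model.classify(x_a)
        l_task = F.cross_entropy(logits_new, y_new)
        l_replay = F.cross_entropy(logits_a, y_a)

        # (2) Output distillation: match the frozen old model on anchors.
        with torch.no_grad():
            t_logits = teacher.classify(x_a)
        l_distill = F.kl_div(F.log_softmax(logits_a, dim=1),
                             F.softmax(t_logits, dim=1),
                             reduction="batchmean")

        # (3) Relational geometry: keep pairwise anchor distances stable.
        z_a = model.embed(x_a)               # model.embed is assumed
        l_geom = ((torch.cdist(z_a, z_a)
                   - torch.cdist(z_anchor_old, z_anchor_old)) ** 2).mean()

        # (4) Local smoothing: outputs should vary slowly near anchors.
        x_pert = x_a + 0.01 * torch.randn_like(x_a)  # scale is a guess
        l_smooth = F.mse_loss(model.classify(x_pert), logits_a.detach())

        # (5) Chart assignment: anchors keep their original chart labels.
        l_chart = F.cross_entropy(model.chart_logits(x_a), chart_old)

        return (l_task + w["replay"] * l_replay + w["distill"] * l_distill
                + w["geom"] * l_geom + w["smooth"] * l_smooth
                + w["chart"] * l_chart)

Note that in this sketch four of the five regularizers act only on the old anchors, which matches the abstract's emphasis on anchor-level regularization of a shared latent support rather than heavy replay of past data.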