NORACL: Neurogenesis for Oracle-free Resource-Adaptive Continual Learning

arXiv cs.LG / 5/1/2026


Key Points

  • The paper frames the stability–plasticity dilemma in continual learning as an architectural problem: fixed-capacity networks cannot be sized for an unknown future task count or for how much future tasks will overlap in feature space.
  • It argues that regularization-based continual learning methods implicitly depend on an “oracle” architecture sized for the unknown future, leading to over-provisioning or eventual loss of plasticity depending on task relationships.
  • NORACL is proposed as a neurogenesis-inspired approach that starts from a compact model and grows neurons only when representational and plasticity saturation signals indicate it is needed.
  • Experiments compare NORACL with oracle-sized static baselines across different task counts and task geometries, showing equal or better final accuracy while using fewer parameters.
  • The authors analyze the grown architectures and find that growth is structured and interpretable: dissimilar tasks expand earlier feature-extraction layers, whereas overlapping tasks shift growth toward later feature-combination layers, and growth supplies fresh capacity for each new task.

Abstract

In a continual learning setting, we require a model that is plastic enough to learn new tasks and stable enough not to disturb previously learned capabilities. We argue that this dilemma has an architectural root. A finite network has limited representational and plastic resources, yet the required capacity depends on properties of the future task stream that are unknown: how many tasks will be encountered, and how much they overlap in feature space. Regularization-based methods preserve past knowledge within fixed-capacity architectures and therefore implicitly rely on an oracle architecture sized for this unknown future. When tasks are only weakly related, fixed architectures progressively run out of plastic resources; when tasks are few or strongly overlapping, models are often over-provisioned. Inspired by biological neurogenesis, we propose NORACL, which addresses the stability–plasticity dilemma by tackling the oracle-architecture problem through neuronal growth. Starting from a compact network, NORACL grows only when needed, monitoring two complementary signals for representational and plasticity saturation. We evaluate NORACL against oracle-sized static baselines across varying task counts and geometries. Across all settings, NORACL achieves final average accuracies that are better than or on par with oracle-provisioned static baselines while using fewer parameters. NORACL also yields architectures with interpretable growth: dissimilar tasks predominantly expand feature-extraction layers, whereas tasks that rely on common features shift growth toward later feature-combination layers. Our analysis further explains why fixed-capacity networks lose plasticity as tasks accumulate, whereas NORACL creates fresh capacity for new tasks through growth. Together, these results show that adaptive neurogenesis pushes the stability–plasticity Pareto frontier of continual learning.
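The growth rule the abstract describes — grow only when both a representational and a plasticity saturation signal fire — can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the concrete signals chosen here (effective rank of the activation covariance as a representational-saturation proxy, mean gradient magnitude as a plasticity-saturation proxy), the thresholds, and the helper names are all assumptions for the sake of the sketch.

```python
import numpy as np

def effective_rank(acts):
    """Representational-saturation proxy (assumed signal): the
    exponentiated entropy of the normalized singular-value spectrum
    of the centered activation matrix (samples x neurons)."""
    s = np.linalg.svd(acts - acts.mean(axis=0), compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

def should_grow(acts, grads, rank_frac=0.9, grad_tol=1e-3):
    """Trigger growth only when BOTH signals saturate: the layer's
    effective rank approaches its width (no spare representational
    room) and gradients have collapsed (no spare plasticity).
    Both thresholds are illustrative, not from the paper."""
    width = acts.shape[1]
    rep_saturated = effective_rank(acts) > rank_frac * width
    plast_saturated = np.abs(grads).mean() < grad_tol
    return rep_saturated and plast_saturated

def grow_layer(W, b, n_new, rng):
    """Neurogenesis step: append n_new freshly initialized output
    neurons to a dense layer, leaving existing weights untouched
    so previously learned features are preserved."""
    W_new = rng.normal(0.0, 0.01, size=(W.shape[0], n_new))
    return np.hstack([W, W_new]), np.concatenate([b, np.zeros(n_new)])
```

In this sketch, stability comes from never modifying existing columns of `W`, while plasticity is restored by the new randomly initialized neurons; a downstream layer would need a matching input-dimension expansion, which is omitted here for brevity.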