
Abstraction as a Memory-Efficient Inductive Bias for Continual Learning

arXiv cs.LG / 3/19/2026


Key Points

  • Abstraction-Augmented Training (AAT) is a loss-level modification that encourages the model to jointly optimize over concrete instances and their abstract representations to capture latent relational structure across examples.
  • AAT introduces a memory-efficient inductive bias that stabilizes learning in strictly online data streams and eliminates the need for a replay buffer.
  • The authors evaluate AAT on two benchmarks—a controlled relational dataset with entity masking and a narrative dataset with shared proverbs—showing competitive or superior performance to strong experience replay baselines.
  • The approach achieves these gains with zero additional memory and only minimal changes to the training objective, highlighting abstraction as a memory-free alternative to replay-based continual learning.

Abstract

The real world is non-stationary and infinitely complex, requiring intelligent agents to learn continually without the prohibitive cost of retraining from scratch. While online continual learning offers a framework for this setting, learning new information often interferes with previously acquired knowledge, causing forgetting and degraded generalization. To address this, we propose Abstraction-Augmented Training (AAT), a loss-level modification encouraging models to capture the latent relational structure shared across examples. By jointly optimizing over concrete instances and their abstract representations, AAT introduces a memory-efficient inductive bias that stabilizes learning in strictly online data streams, eliminating the need for a replay buffer. To capture the multi-faceted nature of abstraction, we introduce and evaluate AAT on two benchmarks: a controlled relational dataset where abstraction is realized through entity masking, and a narrative dataset where abstraction is expressed through shared proverbs. Our results show that AAT achieves performance comparable to or exceeding strong experience replay (ER) baselines, despite requiring zero additional memory and only minimal changes to the training objective. This work highlights structural abstraction as a powerful, memory-free alternative to ER.
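To make the loss-level idea concrete, the sketch below illustrates one plausible reading of an AAT-style objective: a standard loss on the concrete sequence plus a weighted loss on an entity-masked "abstract" variant of the same sequence. This is a hypothetical illustration, not the paper's implementation; the function names (`aat_loss`, `mask_entities`), the weighting scheme `lam`, and the toy model are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of an AAT-style joint objective (illustration only;
# the paper's exact loss formulation may differ).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 10  # toy vocabulary size

def cross_entropy(logits, targets):
    # Mean token-level cross-entropy over a sequence.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    return -np.mean(log_probs[np.arange(len(targets)), targets])

def mask_entities(tokens, entity_ids, mask_id=0):
    # "Abstraction" via entity masking: replace concrete entity tokens
    # with a shared mask token, exposing the relational skeleton.
    return np.array([mask_id if t in entity_ids else t for t in tokens])

def aat_loss(model_logits_fn, tokens, entity_ids, lam=0.5):
    # Joint objective: loss on the concrete instance plus a weighted loss
    # on its abstract (entity-masked) counterpart. No replay buffer is kept.
    concrete = cross_entropy(model_logits_fn(tokens), tokens)
    abstract_tokens = mask_entities(tokens, entity_ids)
    abstract = cross_entropy(model_logits_fn(abstract_tokens), abstract_tokens)
    return concrete + lam * abstract

# Toy stand-in for a model: a fixed random logit table indexed by token id.
W = rng.normal(size=(VOCAB, VOCAB))
def toy_model(tokens):
    return W[tokens]

tokens = np.array([3, 7, 2, 7, 5])
loss = aat_loss(toy_model, tokens, entity_ids={7}, lam=0.5)
print(float(loss))
```

In a real training loop, both terms would be computed in the same forward/backward pass over the current stream example, which is why the method adds no memory beyond the minibatch itself.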