Beyond Single-Model Optimization: Preserving Plasticity in Continual Reinforcement Learning

arXiv cs.AI / 4/20/2026


Key Points

  • Continual reinforcement learning often over-relies on “single-model preservation,” which can fail because a retained policy may become a poor starting point after interference, reflecting a loss of plasticity.
  • The paper introduces TeLAPA (Transfer-Enabled Latent-Aligned Policy Archives), which stores behaviorally diverse policy neighborhoods per task and uses a shared latent space so archived policies remain comparable and reusable under non-stationary changes.
  • TeLAPA’s key shift is from preserving isolated policies to maintaining skill-aligned neighborhoods—competent, behaviorally related alternatives that better support future relearning.
  • In a MiniGrid continual learning setup, TeLAPA increases the number of tasks learned successfully, speeds up recovery on revisited tasks after interference, and maintains higher performance over task sequences.
  • The authors find that source-optimal policies are frequently not transfer-optimal even locally, and that effective reuse requires retaining and selecting among multiple nearby options rather than collapsing them into one representative policy.
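The archive-plus-shared-latent-space idea from the bullets above can be sketched as a small data structure. This is a minimal illustration only: the class name, entry layout, and Euclidean distance metric are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

class PolicyArchive:
    """Per-task archive of behaviorally diverse policies.

    Each entry keeps a policy (represented here by an opaque params
    object), its return on the source task, and a behavior embedding
    in a latent space assumed to be shared across tasks, so that
    policies from different tasks remain comparable.
    """

    def __init__(self):
        # list of (params, source_return, latent_embedding) tuples
        self.entries = []

    def add(self, params, source_return, embedding):
        self.entries.append(
            (params, source_return, np.asarray(embedding, dtype=float))
        )

    def neighborhood(self, query_embedding, k=3):
        """Return the k archived policies closest to `query_embedding`
        in the latent space: a skill-aligned neighborhood of reuse
        candidates, rather than the single highest-return policy."""
        q = np.asarray(query_embedding, dtype=float)
        dists = [np.linalg.norm(e - q) for _, _, e in self.entries]
        order = np.argsort(dists)[:k]
        return [self.entries[i] for i in order]
```

Returning a neighborhood rather than a single argmax entry is the point of the sketch: downstream transfer can then choose among several behaviorally related alternatives.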

Abstract

Continual reinforcement learning must balance retention with adaptation, yet many methods still rely on *single-model preservation*, committing to one evolving policy as the main reusable solution across tasks. Even when a previously successful policy is retained, it may no longer provide a reliable starting point for rapid adaptation after interference, reflecting a form of *loss of plasticity* that single-policy preservation cannot address. Inspired by quality-diversity methods, we introduce TeLAPA (Transfer-Enabled Latent-Aligned Policy Archives), a continual RL framework that organizes behaviorally diverse policy neighborhoods into per-task archives and maintains a shared latent space so that archived policies remain comparable and reusable under non-stationary drift. This perspective shifts continual RL from retaining isolated solutions to maintaining *skill-aligned neighborhoods* with competent and behaviorally related policies that support future relearning. In our MiniGrid continual learning setting, TeLAPA learns more tasks successfully, recovers competence faster on revisited tasks after interference, and retains higher performance across a sequence of tasks. Our analyses show that source-optimal policies are often not transfer-optimal, even within a local competent neighborhood, and that effective reuse depends on retaining and selecting among multiple nearby alternatives rather than collapsing them to one representative. Together, these results reframe continual RL around reusable and competent policy neighborhoods, providing a route beyond single-model preservation toward more plastic lifelong agents.
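The abstract's finding that source-optimal policies are often not transfer-optimal suggests a simple selection scheme: briefly probe several archived neighbors on the new task and keep the best performer, rather than trusting source-task returns. The sketch below is a hedged illustration of that idea; the function name, the `(params, source_return)` candidate format, and the `probe_return` interface are hypothetical, not the paper's API.

```python
import numpy as np

def select_transfer_policy(candidates, probe_return, budget_episodes=5):
    """Pick a starting policy for a new task by briefly evaluating each
    archived candidate on that task.

    candidates      -- list of (params, source_return) pairs from an archive
    probe_return    -- callable estimating one episode's return of a policy
                       on the *new* task
    budget_episodes -- cheap evaluation budget per candidate

    Note that source_return is deliberately ignored when choosing: the
    candidate that was optimal on its source task may be a poor starting
    point after interference.
    """
    probes = [
        np.mean([probe_return(params) for _ in range(budget_episodes)])
        for params, _ in candidates
    ]
    best = int(np.argmax(probes))
    return candidates[best][0], probes[best]
```

In a toy run, a candidate with the highest source-task return can lose to a behaviorally nearby alternative that probes better on the new task, which is exactly the failure mode single-model preservation cannot detect.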