TSN-Affinity: Similarity-Driven Parameter Reuse for Continual Offline Reinforcement Learning

arXiv cs.LG / April 29, 2026

📰 News · Models & Research

Key Points

  • The paper introduces TSN-Affinity, a method for continual offline reinforcement learning (CORL) that learns multiple tasks from time-evolving datasets while preserving performance on earlier tasks.
  • TSN-Affinity combines TinySubNetworks with a Decision Transformer and uses an RL-aware routing strategy that shares parameters based on action compatibility and latent similarity between tasks.
  • The approach targets key weaknesses of replay-based CORL, including memory overhead and distribution mismatch between replayed data and newly learned policies.
  • Experiments on Atari games and Franka Emika Panda manipulation simulations (covering both discrete and continuous control) show strong retention using sparse SubNetworks, with routing further improving multi-task performance.
  • The authors conclude that similarity-guided architectural parameter reuse can be a viable alternative to replay-based strategies in CORL.

Abstract

Continual offline reinforcement learning (CORL) aims to learn a sequence of tasks from datasets collected over time while preserving performance on previously learned tasks. This setting corresponds to domains where new tasks arise over time but adapting the model through live environment interaction is expensive, risky, or impossible. CORL thus inherits a dual difficulty: the challenges of offline reinforcement learning, and the need to adapt to new tasks while preventing catastrophic forgetting. Replay-based continual learning approaches remain a strong baseline but incur memory overhead and suffer from a distribution mismatch between replayed samples and newly learned policies. At the same time, architectural continual learning methods have shown strong potential in supervised learning but remain underexplored in CORL. In this work, we propose TSN-Affinity, a novel CORL method based on TinySubNetworks and the Decision Transformer. The method enables task-specific parameterization and controlled knowledge sharing through an RL-aware reuse strategy that routes tasks according to action compatibility and latent similarity. We evaluate the approach on benchmarks based on Atari games and simulations of manipulation tasks with the Franka Emika Panda robotic arm, covering both discrete and continuous control. Results show strong retention from sparse SubNetworks, with routing further improving multi-task performance. Our findings suggest that similarity-guided architectural reuse is a strong and viable alternative to replay-based strategies in a CORL setting. Our code is available at: https://github.com/anonymized-for-submission123/tsn-affinity.
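To make the routing idea concrete, here is a minimal toy sketch of similarity-guided parameter reuse. Note that the paper's actual routing criteria are not reproduced here: the task representation, the compatibility check, the cosine similarity metric, and the threshold below are all illustrative assumptions, not TSN-Affinity's implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def route_task(new_task, prior_tasks, sim_threshold=0.8):
    """Decide which prior task's subnetwork a new task may reuse.

    Each task is a dict with two illustrative fields:
      - "action_space": e.g. ("discrete", 18) or ("continuous", 7)
      - "latent": a fixed-size embedding summarizing the task's offline data

    Returns the index of the most similar, action-compatible prior task,
    or None, meaning a fresh sparse subnetwork should be allocated.
    """
    best_idx, best_sim = None, sim_threshold
    for i, task in enumerate(prior_tasks):
        # Action compatibility: only share parameters across tasks whose
        # action spaces match (a stand-in for the paper's compatibility test).
        if task["action_space"] != new_task["action_space"]:
            continue
        sim = cosine_similarity(task["latent"], new_task["latent"])
        if sim >= best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

For example, a new discrete-control task whose latent embedding nearly matches an earlier discrete task would be routed to that task's subnetwork, while a task with an orthogonal embedding (or a continuous action space) would receive its own subnetwork. The key design point the sketch illustrates is that reuse is gated twice: first by a hard action-compatibility check, then by a soft similarity threshold.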