TSN-Affinity: Similarity-Driven Parameter Reuse for Continual Offline Reinforcement Learning
arXiv cs.LG, April 29, 2026
Key Points
- The paper introduces TSN-Affinity, a method for continual offline reinforcement learning (CORL) that learns multiple tasks from time-evolving datasets while preserving performance on earlier tasks.
- TSN-Affinity combines TinySubNetworks with a Decision Transformer and uses an RL-aware routing strategy that shares parameters based on action compatibility and latent similarity between tasks.
- The approach targets key weaknesses of replay-based CORL, including memory overhead and distribution mismatch between replayed data and newly learned policies.
- Experiments on Atari and Franka Emika Panda manipulation simulations (covering both discrete and continuous control) show strong retention using sparse SubNetworks, with similarity-based routing further improving multi-task performance.
- The authors conclude that similarity-guided architectural parameter reuse can be a viable alternative to replay-based strategies in CORL.
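The core routing idea summarized above can be illustrated with a minimal sketch. This is not the paper's implementation: the `route_task` function, the embedding representation, the similarity threshold, and the random sparse-mask allocation are all illustrative assumptions. It only shows the general pattern of reusing a prior task's sparse parameter mask when a new task's latent embedding is sufficiently similar, and allocating a fresh mask otherwise.

```python
# Hypothetical sketch of similarity-driven parameter reuse (illustrative only,
# not TSN-Affinity's actual algorithm): each task owns a sparse binary mask
# over a shared parameter vector; a new task reuses the mask of the most
# similar prior task when cosine similarity exceeds a threshold.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two task embeddings.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def route_task(new_embedding, task_embeddings, task_masks, n_params,
               sparsity=0.1, threshold=0.8, rng=None):
    """Return (mask, source_task): a reused mask and its source task index,
    or a freshly sampled sparse mask and None."""
    rng = rng or np.random.default_rng(0)
    if task_embeddings:
        sims = [cosine(new_embedding, e) for e in task_embeddings]
        best = int(np.argmax(sims))
        if sims[best] >= threshold:
            return task_masks[best], best  # share parameters with prior task
    # Allocate a new sparse subnetwork mask over the shared parameters.
    mask = np.zeros(n_params, dtype=bool)
    mask[rng.choice(n_params, int(sparsity * n_params), replace=False)] = True
    return mask, None

# Usage: a near-duplicate task reuses the first task's mask.
e1 = np.array([1.0, 0.0, 0.0])
m1, _ = route_task(e1, [], [], n_params=100)
m2, src = route_task(np.array([0.9, 0.1, 0.0]), [e1], [m1], n_params=100)
print(src)  # -> 0 (routed to task 0's subnetwork)
```

In a real CORL setting the embeddings would come from the policy's latent space and the compatibility check would also account for action-space overlap, per the paper's description; the threshold here is an arbitrary placeholder.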