Task-specific Subnetwork Discovery in Reinforcement Learning for Autonomous Underwater Navigation

arXiv cs.LG / 4/24/2026


Key Points

  • The paper presents an analysis of how a pretrained multi-task reinforcement learning (RL) policy network behaves internally in an autonomous underwater navigation setting.
  • Using the HoloOcean simulator, it identifies and compares task-specific subnetworks responsible for navigating toward different underwater targets (species), aiming to improve interpretability.
  • Results show that in a contextual multi-task RL setup with related tasks, the network differentiates tasks using only about 1.5% of its weights, suggesting strong parameter sharing.
  • Of the task-differentiating weights, around 85% link context-variable nodes in the input layer to the next hidden layer, emphasizing the central role of context variables.
  • The authors argue the findings can support safer real-world deployment by clarifying shared vs. specialized components, and can enable more efficient model editing, transfer learning, and continual learning.
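The kind of measurement behind the 1.5% and 85% figures can be sketched numerically. The snippet below is an illustrative toy, not the paper's code: it compares the input-layer weights of two task-conditioned copies of a shared policy, counts the weights that differ between tasks, and checks what fraction of those touch the context-variable inputs. All sizes, the perturbation model, and the 1e-3 difference threshold are assumptions for illustration.

```python
# Toy sketch of task-differentiating weight analysis (assumed setup, not the
# paper's actual network or method).
import numpy as np

rng = np.random.default_rng(0)

N_STATE = 10     # assumed state inputs (e.g. pose, sonar features)
N_CONTEXT = 2    # assumed context variables selecting the target species
N_HIDDEN = 64    # assumed hidden-layer width
n_in = N_STATE + N_CONTEXT

# Shared backbone: both tasks start from the same input-layer weights.
W_shared = rng.normal(scale=0.1, size=(N_HIDDEN, n_in))

# Simulate task specialization that only perturbs the context columns,
# mimicking a setting where related tasks share most parameters.
ctx_cols = slice(N_STATE, n_in)
W_task_a = W_shared.copy()
W_task_b = W_shared.copy()
W_task_a[:, ctx_cols] += rng.normal(scale=0.05, size=(N_HIDDEN, N_CONTEXT))
W_task_b[:, ctx_cols] += rng.normal(scale=0.05, size=(N_HIDDEN, N_CONTEXT))

# Task-differentiating weights: entries that differ beyond a tolerance.
diff_mask = np.abs(W_task_a - W_task_b) > 1e-3
frac_differing = diff_mask.mean()

# Of those, how many connect context-input nodes to the hidden layer?
ctx_mask = np.zeros_like(diff_mask)
ctx_mask[:, ctx_cols] = True
frac_from_context = (diff_mask & ctx_mask).sum() / max(diff_mask.sum(), 1)

print(f"differing weights: {frac_differing:.1%}")
print(f"of which from context inputs: {frac_from_context:.1%}")
```

In this contrived setup all task differences sit in the context columns by construction; the paper's finding is the empirical analogue, that in a trained contextual multi-task policy most task-differentiating weights cluster on the context-to-hidden connections.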

Abstract

Autonomous underwater vehicles are required to perform multiple tasks adaptively and in an explainable manner under dynamic, uncertain conditions and limited sensing, challenges that classical controllers struggle to address. This demands robust, generalizable, and inherently interpretable control policies for reliable long-term monitoring. Reinforcement learning, particularly multi-task RL, overcomes these limitations by leveraging shared representations to enable efficient adaptation across tasks and environments. However, while such policies show promising results in simulation and controlled experiments, they remain opaque and offer limited insight into the agent's internal decision-making, creating gaps in transparency, trust, and safety that hinder real-world deployment. The internal policy structure and task-specific specialization remain poorly understood. To address these gaps, we analyze the internal structure of a pretrained multi-task reinforcement learning network in the HoloOcean simulator for underwater navigation by identifying and comparing task-specific subnetworks responsible for navigating toward different species. We find that in a contextual multi-task reinforcement learning setting with related tasks, the network uses only about 1.5% of its weights to differentiate between tasks. Of these, approximately 85% connect the context-variable nodes in the input layer to the next hidden layer, highlighting the importance of context variables in such settings. Our approach provides insights into shared and specialized network components, useful for efficient model editing, transfer learning, and continual learning for underwater monitoring through a contextual multi-task reinforcement learning method.