Value Explicit Pretraining for Learning Transferable Representations

arXiv cs.RO / 5/4/2026


Key Points

  • The paper proposes Value Explicit Pretraining (VEP), a method for learning transferable visual representations for reinforcement learning agents across varied environment changes.
  • VEP pretrains an encoder on suboptimal, unlabeled demonstration data by applying a self-supervised contrastive loss grounded in Monte Carlo value estimates that reflect task progress (see the value-estimate sketch after this list).
  • The approach aims to produce temporally smooth representations that remain invariant to changes in both environment dynamics and appearance.
  • Experiments on Ant locomotion, a navigation simulator, and Atari show VEP improves generalization to unseen tasks, reaching up to 2× higher rewards and up to 3× better sample efficiency than prior state-of-the-art pretraining methods.
  • The key differentiator from existing approaches is that VEP explicitly leverages, during pretraining, suboptimal unlabeled demonstrations that do not always solve the task.
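
The summary above includes no code, so here is a minimal sketch of the Monte Carlo value label the bullets refer to: with sparse rewards, the discounted return computed backward over an episode rises monotonically toward the goal, giving a per-observation task-progress signal. The function name and discount factor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def monte_carlo_values(rewards, gamma=0.99):
    """Discounted Monte Carlo return at every timestep of one episode.

    With a single sparse terminal reward, the value rises smoothly
    toward the goal, so it can serve as a task-progress label.
    (Illustrative sketch; not the paper's implementation.)
    """
    values = np.zeros(len(rewards), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        values[t] = running
    return values

# Example: a 5-step episode with one sparse reward at the end.
print(monte_carlo_values([0, 0, 0, 0, 1.0], gamma=0.9))
# -> [0.6561, 0.729, 0.81, 0.9, 1.0]  (monotone in task progress)
```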

Abstract

Understanding visual inputs for a given task amidst varied changes is a key challenge for visual reinforcement learning agents. We propose Value Explicit Pretraining (VEP), a method that learns generalizable representations for transfer reinforcement learning. VEP enables efficient learning of new tasks that share objectives with previously learned tasks by training an encoder whose representations are invariant to changes in environment dynamics and appearance. To pretrain the encoder with suboptimal unlabeled demonstration data (sequences of observations and sparse reward signals), we use a self-supervised contrastive loss that lets the model relate states across different tasks based on a Monte Carlo value estimate reflective of task progress, resulting in temporally smooth representations that capture the objective of the task. A major difference between our method and existing approaches is the use of suboptimal unlabeled data that do not always solve the task. Experiments on Ant locomotion, a realistic navigation simulator, and the Atari benchmark show that VEP outperforms current SoTA pretraining methods in generalizing to unseen tasks, achieving up to 2× improvement in rewards and up to 3× improvement in sample efficiency. For videos of VEP policies, visit our website: https://sites.google.com/view/value-explicit-pretraining/
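
To make the contrastive objective concrete, below is a minimal InfoNCE-style sketch of the kind of loss the abstract describes: observations whose value estimates are close (possibly drawn from different tasks) are pulled together in embedding space, while observations with dissimilar values are pushed apart. The function, its argument layout, and the pairing scheme are assumptions for illustration; the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def value_contrastive_loss(encoder, anchor_obs, positive_obs, negative_obs,
                           temperature=0.1):
    """InfoNCE-style loss over value-matched observation pairs.

    anchor_obs, positive_obs: (B, ...) observations whose Monte Carlo
        value estimates are close (positives may come from other tasks).
    negative_obs: (B, K, ...) observations with dissimilar value estimates.
    encoder: maps a batch of observations to embeddings of size D.
    (Hypothetical sketch; names and shapes are not from the paper.)
    """
    z_a = F.normalize(encoder(anchor_obs), dim=-1)    # (B, D)
    z_p = F.normalize(encoder(positive_obs), dim=-1)  # (B, D)
    # Flatten negatives through the encoder, then restore (B, K, D).
    B, K = negative_obs.shape[:2]
    z_n = F.normalize(encoder(negative_obs.reshape(B * K, *negative_obs.shape[2:])),
                      dim=-1).reshape(B, K, -1)

    pos_logits = (z_a * z_p).sum(-1, keepdim=True) / temperature     # (B, 1)
    neg_logits = torch.einsum('bd,bkd->bk', z_a, z_n) / temperature  # (B, K)

    logits = torch.cat([pos_logits, neg_logits], dim=1)              # (B, 1+K)
    labels = torch.zeros(B, dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)  # positive sits at index 0
```

In this reading, minimizing the loss encourages the encoder to map states with similar task progress to nearby embeddings regardless of the task they came from, which is one plausible route to the temporally smooth, appearance-invariant representations the abstract claims.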