Abstract
Understanding visual inputs for a given task amid variations in appearance and dynamics is a key challenge faced by visual reinforcement learning agents. We propose \textit{Value Explicit Pretraining} (VEP), a method that learns generalizable representations for transfer reinforcement learning. VEP enables efficient learning of new tasks that share objectives with previously learned tasks, by training an encoder whose representations are invariant to changes in environment dynamics and appearance. To pretrain the encoder with \textit{suboptimal unlabeled demonstration data} (sequences of observations and sparse reward signals), we use a self-supervised contrastive loss that relates states across different tasks based on Monte Carlo value estimates, which reflect task progress, yielding temporally smooth representations that capture the objective of the task. A major difference between our method and existing approaches is the use of suboptimal unlabeled data that do not always solve the task. Experiments on Ant locomotion, a realistic navigation simulator, and the Atari benchmark show that VEP outperforms current state-of-the-art pretraining methods in generalizing to unseen tasks, achieving up to $2\times$ improvement in rewards and up to $3\times$ improvement in sample efficiency. For videos of VEP policies, visit our \href{https://sites.google.com/view/value-explicit-pretraining/}{website}.