Maximum Entropy Behavior Exploration for Sim2Real Zero-Shot Reinforcement Learning

arXiv cs.LG / 3/27/2026


Key Points

  • The paper studies online zero-shot reinforcement learning for quadrupedal control on real robots, where a reward-free dataset is collected online to support reward recovery at test time.
  • It finds that undirected exploration can produce low-diversity data, which degrades downstream performance and renders the resulting policies impractical for direct hardware deployment.
  • To address this, the authors propose FB-MEBE, which combines the Forward-Backward (FB) approach with maximum-entropy-based exploration to maximize the entropy of the achieved behavior distribution.
  • The method also adds a regularization critic to steer recovered policies toward more natural and physically plausible behaviors, improving transfer to real hardware.
  • Experiments show FB-MEBE improves performance across simulated downstream tasks and produces natural policies that can be deployed to hardware without additional fine-tuning.
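
The core idea of maximizing the entropy of the achieved behavior distribution is often implemented with a particle-based (k-nearest-neighbor) entropy estimate: states or behaviors that lie in sparsely visited regions of an embedding space receive a larger exploration bonus. The sketch below is a minimal illustration of that general technique, not the paper's actual FB-MEBE implementation; the embedding space, the choice of `k`, and the reward shaping are all assumptions for illustration.

```python
import numpy as np

def knn_intrinsic_reward(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """Particle-based entropy proxy (illustrative, not the paper's code).

    Rewards each sample by the log distance to its k-th nearest neighbor
    in a hypothetical behavior-embedding space: a larger distance means a
    sparser region, hence a higher exploration bonus.
    """
    # Pairwise Euclidean distances between all behavior embeddings.
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Sort each row; index 0 is the self-distance (zero), so the
    # k-th nearest neighbor sits at index k.
    kth = np.sort(dists, axis=1)[:, k]
    # +1 inside the log keeps the bonus finite and non-negative.
    return np.log(kth + 1.0)

# Toy usage: nine clustered behaviors plus one outlier. The outlier sits in
# a sparse region, so it should collect the largest exploration bonus.
rng = np.random.default_rng(0)
cluster = rng.normal(0.0, 0.01, size=(9, 2))
outlier = np.array([[10.0, 10.0]])
rewards = knn_intrinsic_reward(np.concatenate([cluster, outlier]), k=3)
```

In an online loop, such a bonus would be added to (or substituted for) the extrinsic reward during data collection, steering the agent toward behaviors that broaden the pretraining dataset.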

Abstract

Zero-shot reinforcement learning (RL) algorithms aim to learn a family of policies from a reward-free dataset, and recover optimal policies for any reward function directly at test time. Naturally, the quality of the pretraining dataset determines the performance of the recovered policies across tasks. However, pre-collecting a relevant, diverse dataset without prior knowledge of the downstream tasks of interest remains a challenge. In this work, we study *online* zero-shot RL for quadrupedal control on real robotic systems, building upon the Forward-Backward (FB) algorithm. We observe that undirected exploration yields low-diversity data, leading to poor downstream performance and rendering policies impractical for direct hardware deployment. Therefore, we introduce FB-MEBE, an online zero-shot RL algorithm that combines an unsupervised behavior exploration strategy with a regularization critic. FB-MEBE promotes exploration by maximizing the entropy of the achieved behavior distribution. Additionally, a regularization critic shapes the recovered policies toward more natural and physically plausible behaviors. We empirically demonstrate that FB-MEBE achieves improved performance compared to other exploration strategies in a range of simulated downstream tasks, and that it renders natural policies that can be seamlessly deployed to hardware without further fine-tuning. Videos and code available on our website.