Learning Task-Invariant Properties via Dreamer: Enabling Efficient Policy Transfer for Quadruped Robots

arXiv cs.RO / 4/6/2026


Key Points

  • The paper introduces DreamTIP, a Dreamer-based framework for quadruped locomotion that learns Task-Invariant Properties to improve sim-to-real transfer across diverse, dynamic terrains.
  • It uses auxiliary prediction targets in the world model (e.g., contact stability and terrain clearance) to produce policy representations that are less sensitive to underlying dynamics changes.
  • An efficient real-world adaptation strategy combines a mixed replay buffer with regularization constraints to reduce representation collapse and catastrophic forgetting during calibration.
  • Experiments on complex terrain scenarios (Stair, Climb, Tilt, Crawl) show substantial gains, including an average +28.1% improvement across eight simulated transfer tasks.
  • Real-world results report a dramatic jump on the Climb task, from a 10% success rate for the baseline to 100% with DreamTIP.

Abstract

Achieving quadruped robot locomotion across diverse and dynamic terrains presents significant challenges, primarily due to the discrepancies between simulation environments and real-world conditions. Traditional sim-to-real transfer methods often rely on manual feature design or costly real-world fine-tuning. To address these limitations, this paper proposes the DreamTIP framework, which incorporates Task-Invariant Properties learning within the Dreamer world model architecture to enhance sim-to-real transfer capabilities. Guided by large language models, DreamTIP identifies and leverages Task-Invariant Properties, such as contact stability and terrain clearance, which exhibit robustness to dynamic variations and strong transferability across tasks. These properties are integrated into the world model as auxiliary prediction targets, enabling the policy to learn representations that are insensitive to underlying dynamic changes. Furthermore, an efficient adaptation strategy is designed, employing a mixed replay buffer and regularization constraints to rapidly calibrate to real-world dynamics while effectively mitigating representation collapse and catastrophic forgetting. Extensive experiments on complex terrains, including Stair, Climb, Tilt, and Crawl, demonstrate that DreamTIP significantly outperforms state-of-the-art baselines in both simulated and real-world environments. Our method achieves an average performance improvement of 28.1% across eight distinct simulated transfer tasks. In the real-world Climb task, the baseline method achieved only a 10% success rate, whereas our method attained a 100% success rate. These results indicate that incorporating Task-Invariant Properties into Dreamer learning offers a novel solution for achieving robust and transferable robot locomotion.
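
The abstract describes two mechanisms: (1) auxiliary heads in the world model that predict Task-Invariant Properties such as contact stability and terrain clearance, and (2) a real-world adaptation step that mixes simulation and real replay data under a regularization constraint. The sketch below is a minimal, hedged illustration of what such a setup could look like in PyTorch; it is not the authors' implementation. The class and key names (`TIPWorldModel`, `tip_heads`, `contact_stability`, `terrain_clearance`), the loss weighting, and the L2 anchor toward pre-adaptation weights are assumptions made for illustration only.

```python
# Illustrative sketch only: a toy latent world model with auxiliary
# Task-Invariant Property (TIP) prediction heads. Details (targets, weights,
# architecture) are assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TIPWorldModel(nn.Module):
    """Toy world model whose latent is shaped by auxiliary TIP targets."""

    def __init__(self, obs_dim: int, act_dim: int, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ELU(),
                                     nn.Linear(128, latent_dim))
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + act_dim, 128), nn.ELU(),
                                      nn.Linear(128, latent_dim))
        self.decoder = nn.Linear(latent_dim, obs_dim)
        self.reward_head = nn.Linear(latent_dim, 1)
        # Assumed TIP targets: contact stability (binary) and terrain
        # clearance (scalar). The paper may use different or additional ones.
        self.tip_heads = nn.ModuleDict({
            "contact_stability": nn.Linear(latent_dim, 1),
            "terrain_clearance": nn.Linear(latent_dim, 1),
        })

    def loss(self, batch: dict) -> torch.Tensor:
        z = self.encoder(batch["obs"])
        z_next_pred = self.dynamics(torch.cat([z, batch["act"]], dim=-1))
        z_next = self.encoder(batch["next_obs"]).detach()

        recon = F.mse_loss(self.decoder(z), batch["obs"])
        dyn = F.mse_loss(z_next_pred, z_next)
        reward = F.mse_loss(self.reward_head(z).squeeze(-1), batch["reward"])

        # Auxiliary TIP losses encourage the latent to encode properties
        # that stay meaningful when the underlying dynamics change.
        tip = F.binary_cross_entropy_with_logits(
            self.tip_heads["contact_stability"](z).squeeze(-1),
            batch["contact_stability"])
        tip = tip + F.mse_loss(
            self.tip_heads["terrain_clearance"](z).squeeze(-1),
            batch["terrain_clearance"])
        return recon + dyn + reward + 0.5 * tip
```

For the adaptation phase, a generic stand-in for the "mixed replay buffer and regularization constraints" is shown below: each update draws a batch mixed from simulation and real-world buffers, and an L2 penalty anchors the model to its simulation-pretrained weights to limit drift. The buffers are assumed to expose a `sample(n)` method returning tensor dictionaries; the mix ratio and penalty weight are placeholders.

```python
def adapt_step(model, optimizer, sim_buffer, real_buffer,
               pretrained_params, mix_ratio: float = 0.5,
               reg_weight: float = 1e-3) -> float:
    """One real-world calibration update (illustrative, not the paper's rule)."""
    sim_batch = sim_buffer.sample(int(256 * (1 - mix_ratio)))
    real_batch = real_buffer.sample(int(256 * mix_ratio))
    batch = {k: torch.cat([sim_batch[k], real_batch[k]]) for k in sim_batch}

    loss = model.loss(batch)
    # Penalize drift from the simulation-pretrained parameters, a simple way
    # to hedge against representation collapse and catastrophic forgetting.
    reg = sum((p - p0).pow(2).sum()
              for p, p0 in zip(model.parameters(), pretrained_params))
    loss = loss + reg_weight * reg

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `pretrained_params` would be a snapshot taken before adaptation, e.g. `[p.detach().clone() for p in model.parameters()]`, so the penalty measures movement away from the simulation-trained solution while the mixed batches pull the model toward real-world dynamics.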