Chasing Autonomy: Dynamic Retargeting and Control Guided RL for Performant and Controllable Humanoid Running

arXiv cs.RO / 3/30/2026


Key Points

  • The paper proposes a pipeline that dynamically retargets human motion to a humanoid through an optimization routine with hard constraints, producing periodic reference libraries from a single human demonstration.
  • It finds that both the choice of reference motion and the reward design significantly affect velocity tracking, with a goal-conditioned, control-guided reward that follows dynamically optimized human data delivering the best performance.
  • The authors deploy the resulting RL policy on a Unitree G1 robot, demonstrating fast and endurance-oriented running up to 3.3 m/s and hundreds of meters of real-world traversal.
  • The work also showcases locomotion controllability by integrating the controller into a perception-and-planning autonomy stack for outdoor obstacle avoidance while running.
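
The paper's exact reward terms are not given in this summary; a minimal illustrative sketch of a goal-conditioned, control-guided reward, assuming one exponential term that tracks the commanded velocity and one that imitates the optimized reference motion (all function names, weights, and scales below are hypothetical):

```python
import numpy as np

def velocity_tracking_reward(base_vel_xy, cmd_vel_xy,
                             joint_pos, ref_joint_pos,
                             sigma_vel=0.25, sigma_ref=0.5,
                             w_vel=1.0, w_ref=0.5):
    """Hypothetical goal-conditioned, control-guided reward sketch.

    r = w_vel * exp(-||v_cmd - v_base||^2 / sigma_vel)   # track commanded velocity
      + w_ref * exp(-||q_ref - q||^2 / sigma_ref)        # imitate reference motion
    """
    vel_err = np.sum((cmd_vel_xy - base_vel_xy) ** 2)
    ref_err = np.sum((ref_joint_pos - joint_pos) ** 2)
    return w_vel * np.exp(-vel_err / sigma_vel) + w_ref * np.exp(-ref_err / sigma_ref)
```

Conditioning the policy on the commanded velocity while shaping it with a retargeted reference is a common way to keep imitation-style controllers steerable rather than locked to one motion clip.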

Abstract

Humanoid robots hold the promise of locomoting like humans, including fast and dynamic running. Recently, reinforcement learning (RL) controllers that can mimic human motions have become popular because they generate very dynamic behaviors, but they are often restricted to single-motion playback, which hinders their deployment in long-duration and autonomous locomotion. In this paper, we present a pipeline to dynamically retarget human motions through an optimization routine with hard constraints, generating improved periodic reference libraries from a single human demonstration. We then study the effect of both the reference motion and the reward structure on reference and commanded-velocity tracking, concluding that a goal-conditioned, control-guided reward that tracks dynamically optimized human data yields the best performance. We deploy the policy on hardware, demonstrating its speed and endurance by achieving running speeds of up to 3.3 m/s on a Unitree G1 robot and traversing hundreds of meters in real-world environments. Additionally, to demonstrate the controllability of the locomotion, we use the controller in a full perception and planning autonomy stack for obstacle avoidance while running outdoors.
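
The retargeting routine itself is not detailed here; as a toy sketch of the idea of fitting a reference trajectory under hard constraints, the following (entirely hypothetical) projected-gradient loop tracks a human joint trajectory while strictly enforcing joint limits and periodicity, two constraints a periodic reference library for a robot would plausibly need:

```python
import numpy as np

def retarget_periodic(ref_traj, q_min=-1.0, q_max=1.0, lr=0.5, iters=200):
    """Toy hard-constrained retargeting sketch (not the paper's routine).

    Minimizes ||q - ref_traj||^2 by gradient descent, projecting after each
    step onto two hard constraint sets:
      - joint limits:  q_min <= q[t] <= q_max
      - periodicity:   q[0] == q[-1]  (so the clip loops cleanly)
    """
    ref_traj = np.asarray(ref_traj, dtype=float)
    q = np.clip(ref_traj, q_min, q_max)              # feasible initial guess
    for _ in range(iters):
        grad = 2.0 * (q - ref_traj)                  # gradient of tracking cost
        q = q - lr * grad                            # step toward the reference
        q = np.clip(q, q_min, q_max)                 # project onto joint limits
        q[0] = q[-1] = 0.5 * (q[0] + q[-1])          # project onto periodicity
    return q
```

A real pipeline would optimize over full-body poses with dynamics and contact constraints, but the pattern is the same: track the human demonstration as closely as the robot's hard constraints allow, then hand the resulting periodic clip to the RL reward.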