Learning Whole-Body Humanoid Locomotion via Motion Generation and Motion Tracking
arXiv cs.RO / 4/21/2026
Key Points
- The paper tackles whole-body humanoid locomotion, which is difficult due to high-dimensional control, the inherent instability of the humanoid morphology, and the need to adapt in real time to changing terrain using onboard perception.
- It introduces a framework that pairs a diffusion-based motion generation model (trained on retargeted human motions) with an RL-trained whole-body motion tracker to produce terrain-aware reference motions.
- To handle imperfect or noisy generated references, the authors fine-tune the tracker in a closed-loop setup while keeping the motion generator frozen, improving robustness.
- The approach enables directional goal-reaching with terrain-aware whole-body adaptation and is validated on a Unitree G1 humanoid robot using onboard perception and computation across boxes, hurdles, stairs, and mixed terrains.
- Quantitative experiments show that online motion generation and closed-loop fine-tuning of the motion tracker improve generalization and robustness compared with alternatives that rely more heavily on replaying fixed reference motions.
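The generator-frozen, tracker-fine-tuned closed loop described above can be illustrated with a toy sketch. Everything here is an illustrative stand-in, not the paper's implementation: the "generator" is a fixed function emitting a noisy reference trajectory (in place of the diffusion model), the "tracker" is a single linear feedback gain (in place of the RL whole-body policy), and fine-tuning is a finite-difference update of that gain against closed-loop tracking error.

```python
import numpy as np

def generate_reference(goal, horizon=8):
    # Frozen "motion generator" stand-in: a noisy straight-line
    # reference toward the goal. The fixed seed makes its (imperfect)
    # output repeatable, mimicking a generator that is never updated.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, horizon)[:, None]
    ref = t * goal[None, :]
    return ref + 0.05 * rng.normal(size=ref.shape)

class Tracker:
    # Stand-in for the RL whole-body tracker: one linear feedback gain.
    def __init__(self):
        self.gain = 0.5  # initial (suboptimal) tracking gain

    def act(self, state, ref_point):
        # Proportional step toward the current reference point.
        return self.gain * (ref_point - state)

def rollout(tracker, goal):
    # Closed-loop rollout: track the generated reference and
    # accumulate squared tracking error.
    ref = generate_reference(goal)
    state = np.zeros_like(goal)
    err = 0.0
    for ref_point in ref:
        state = state + tracker.act(state, ref_point)
        err += float(np.sum((state - ref_point) ** 2))
    return err

def finetune(tracker, goal, iters=50, eps=0.05, lr=0.1):
    # Closed-loop fine-tuning: the generator stays frozen; only the
    # tracker adapts, via a finite-difference estimate of d(err)/d(gain).
    for _ in range(iters):
        tracker.gain += eps
        up = rollout(tracker, goal)
        tracker.gain -= 2 * eps
        down = rollout(tracker, goal)
        tracker.gain += eps  # restore
        tracker.gain -= lr * (up - down) / (2 * eps)
    return tracker
```

Running `finetune` drives the gain toward better tracking of the noisy reference, so the post-fine-tuning rollout error drops below the initial one: the same effect, in miniature, as adapting the tracker to the generator's imperfect outputs while leaving the generator untouched.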