Morphology-Consistent Humanoid Interaction through Robot-Centric Video Synthesis

arXiv cs.RO / 3/23/2026


Key Points

  • Dream2Act introduces a robot-centric, zero-shot interaction framework that uses a third-person image of the robot and a target object to synthesize plausible robot motion via video generation, avoiding morphology gaps from human-to-robot retargeting.
  • It relies on a high-fidelity pose extraction system to recover feasible robot-native joint trajectories from the synthesized dreams and executes them with a general-purpose whole-body controller within the robot's native coordinate space.
  • By staying in robot-native coordinates and not requiring task-specific policy training, it overcomes the morphology mismatch and retargeting errors that hinder contact formation.
  • In Unitree G1 experiments on four whole-body tasks (ball kicking, sofa sitting, bag punching, box hugging), Dream2Act achieves 37.5% success vs 0% for conventional retargeting, demonstrating substantially improved interaction reliability.

Abstract

Equipping humanoid robots with versatile interaction skills typically requires either extensive policy training or explicit human-to-robot motion retargeting. Learning-based policies, however, face prohibitive data collection costs, while retargeting relies on human-centric pose estimation (e.g., SMPL) and thus introduces a morphology gap: skeletal scale mismatches produce severe spatial misalignments when human motion is mapped onto the robot, compromising interaction success. In this work, we propose Dream2Act, a robot-centric framework enabling zero-shot interaction through generative video synthesis. Given a third-person image of the robot and target object, our framework leverages video generation models to envision the robot completing the task with morphology-consistent motion. We employ a high-fidelity pose extraction system to recover physically feasible, robot-native joint trajectories from these synthesized dreams, which are subsequently executed via a general-purpose whole-body controller. Operating strictly within the robot-native coordinate space, Dream2Act avoids retargeting errors and eliminates task-specific policy training. We evaluate Dream2Act on the Unitree G1 across four whole-body mobile interaction tasks: ball kicking, sofa sitting, bag punching, and box hugging. Dream2Act achieves a 37.5% overall success rate, compared to 0% for conventional retargeting. Whereas retargeting fails to establish correct physical contacts due to the morphology gap (with errors compounding during locomotion), Dream2Act maintains robot-consistent spatial alignment, enabling reliable contact formation and substantially higher task completion.
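The three-stage pipeline described in the abstract — video synthesis, robot-native pose extraction, and whole-body execution — can be sketched as follows. This is a minimal illustrative outline, not the paper's code: every function name, interface, and the 29-DoF joint count (one common Unitree G1 configuration) is an assumption, with the model and controller internals replaced by stubs.

```python
# Hypothetical sketch of the Dream2Act pipeline. All names and interfaces
# are illustrative assumptions; the generative model, pose extractor, and
# whole-body controller are stubbed out.
from typing import List

N_DOF = 29  # assumed joint count for a Unitree G1-class humanoid


def synthesize_video(robot_image: str, task_prompt: str, n_frames: int = 16) -> List[str]:
    """Stage 1 (stub): a video generation model 'dreams' the robot
    completing the task from a third-person image and a task prompt."""
    return [f"frame_{i}" for i in range(n_frames)]


def extract_joint_trajectory(frames: List[str]) -> List[List[float]]:
    """Stage 2 (stub): high-fidelity pose extraction recovers a physically
    feasible, robot-native joint configuration for each synthesized frame,
    staying in the robot's own coordinate space (no SMPL retargeting)."""
    return [[0.0] * N_DOF for _ in frames]


def execute_with_wbc(trajectory: List[List[float]]) -> bool:
    """Stage 3 (stub): a general-purpose whole-body controller tracks the
    extracted joint trajectory on hardware; returns task success."""
    return all(len(q) == N_DOF for q in trajectory) and len(trajectory) > 0


# End-to-end, zero-shot: no task-specific policy training is involved.
frames = synthesize_video("third_person.png", "kick the ball")
trajectory = extract_joint_trajectory(frames)
success = execute_with_wbc(trajectory)
```

Because the trajectory is extracted directly in robot-native joint space, the morphology gap that the paper attributes to human-to-robot retargeting never enters the loop.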