Visually-grounded Humanoid Agents

arXiv cs.RO / 4/10/2026


Key Points

  • The paper proposes “Visually-grounded Humanoid Agents,” aiming to let digital humans actively behave in novel 3D scenes using only visual observations and specified goals, rather than scripted control or privileged state.
  • It introduces a two-layer world-agent framework: a World Layer that reconstructs semantically rich 3D Gaussian scenes from real-world videos and supports animatable Gaussian-based human avatars, paired with an Agent Layer for autonomous humanoid control.
  • The Agent Layer equips avatars with first-person RGB-D perception for embodied planning with spatial awareness and iterative reasoning; the resulting plans are executed as low-level full-body actions (a minimal structural sketch of this loop follows the list).
  • The authors also release a benchmark for evaluating humanoid-scene interaction across diverse reconstructed environments.
  • Experiments report more robust autonomous behavior, with higher task success rates and fewer collisions than ablations and competing planning methods, and the authors plan to open-source data, code, and models.
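The key points above describe the two-layer split but not its interfaces. The following is a minimal, hypothetical Python sketch of how such a world-agent organization might look, assuming a Gaussian-scene container for the World Layer and a perceive-plan-act loop for the Agent Layer. All names here (GaussianScene, HumanoidAgent, render_rgbd, plan_step, and so on) are illustrative assumptions, not the paper's actual data structures or API.

```python
from dataclasses import dataclass
import numpy as np

# --- World Layer (illustrative): a semantically labeled 3D Gaussian scene. ---
@dataclass
class GaussianScene:
    means: np.ndarray      # (N, 3) Gaussian centers
    scales: np.ndarray     # (N, 3) per-axis extents
    rotations: np.ndarray  # (N, 4) quaternions
    opacities: np.ndarray  # (N,)
    colors: np.ndarray     # (N, 3) RGB (real systems typically store SH coefficients)
    semantics: np.ndarray  # (N,) integer semantic labels

    def render_rgbd(self, camera_pose: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
        """Render an egocentric RGB-D view of the scene.
        Placeholder only: a real system would call a Gaussian-splatting rasterizer."""
        rgb = np.zeros((480, 640, 3), dtype=np.float32)
        depth = np.full((480, 640), np.inf, dtype=np.float32)
        return rgb, depth

# --- Agent Layer (illustrative): perceive -> plan -> act with full-body motion. ---
class HumanoidAgent:
    def __init__(self, scene: GaussianScene, goal: str):
        self.scene = scene
        self.goal = goal
        self.pose = np.eye(4)          # world-frame root/camera pose
        self.history: list[str] = []   # reasoning trace for iterative replanning

    def perceive(self) -> tuple[np.ndarray, np.ndarray]:
        # First-person RGB-D observation rendered from the reconstructed scene.
        return self.scene.render_rgbd(self.pose)

    def plan_step(self, rgb: np.ndarray, depth: np.ndarray) -> str:
        # Stand-in for embodied planning with spatial awareness; a real agent
        # might query a vision-language planner with the observation and goal.
        self.history.append(f"observed scene, goal={self.goal}")
        return "walk_forward"

    def act(self, subgoal: str) -> None:
        # Stand-in for low-level full-body motion execution (e.g., a learned
        # motion controller); here we just translate the root pose.
        if subgoal == "walk_forward":
            self.pose[:3, 3] += self.pose[:3, 2] * 0.25  # 25 cm along the facing axis

    def run(self, max_steps: int = 50) -> None:
        for _ in range(max_steps):
            rgb, depth = self.perceive()
            subgoal = self.plan_step(rgb, depth)
            self.act(subgoal)
```

The point of the sketch is the separation of concerns: the World Layer only renders observations from reconstructed geometry, while the Agent Layer closes the loop from egocentric perception to planning to full-body execution.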

Abstract

Digital human generation has been studied for decades and supports a wide range of real-world applications. However, most existing systems are passively animated, relying on privileged state or scripted control, which limits scalability to novel environments. We instead ask: how can digital humans actively behave using only visual observations and specified goals in novel scenes? Achieving this would enable populating any 3D environment at scale with digital humans that exhibit spontaneous, natural, goal-directed behaviors. To this end, we introduce Visually-grounded Humanoid Agents, a coupled two-layer (world-agent) paradigm that replicates humans at multiple levels: they look, perceive, reason, and behave like real people in real-world 3D scenes. The World Layer reconstructs semantically rich 3D Gaussian scenes from real-world videos via an occlusion-aware pipeline and accommodates animatable Gaussian-based human avatars. The Agent Layer transforms these avatars into autonomous humanoid agents, equipping them with first-person RGB-D perception and enabling them to perform accurate, embodied planning with spatial awareness and iterative reasoning, which is then executed at the low level as full-body actions to drive their behaviors in the scene. We further introduce a benchmark to evaluate humanoid-scene interaction in diverse reconstructed environments. Experiments show our agents achieve robust autonomous behavior, yielding higher task success rates and fewer collisions than ablations and state-of-the-art planning methods. This work enables active digital human population and advances human-centric embodied AI. Data, code, and models will be open-sourced.
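The abstract highlights task success rate and collision count as the headline benchmark metrics. As a rough illustration only, aggregating them over evaluation episodes might look like the sketch below; the paper's actual benchmark protocol is not described here, and the EpisodeResult fields are assumptions.

```python
from dataclasses import dataclass

@dataclass
class EpisodeResult:
    # Hypothetical per-episode log from one humanoid-scene interaction rollout.
    task_succeeded: bool  # did the agent complete the specified goal?
    num_collisions: int   # body-scene collisions detected during the rollout
    steps_taken: int      # planning/act steps used before termination

def summarize(results: list[EpisodeResult]) -> dict[str, float]:
    """Aggregate episodes into the metrics emphasized in the abstract:
    task success rate and average collisions per episode."""
    n = len(results)
    if n == 0:
        return {"success_rate": 0.0, "avg_collisions": 0.0, "avg_steps": 0.0}
    return {
        "success_rate": sum(r.task_succeeded for r in results) / n,
        "avg_collisions": sum(r.num_collisions for r in results) / n,
        "avg_steps": sum(r.steps_taken for r in results) / n,
    }

# Example: one successful collision-free episode, one failure with two collisions.
print(summarize([
    EpisodeResult(True, 0, 34),
    EpisodeResult(False, 2, 50),
]))  # {'success_rate': 0.5, 'avg_collisions': 1.0, 'avg_steps': 42.0}
```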