EgoSim: Egocentric World Simulator for Embodied Interaction Generation

arXiv cs.CV / 4/2/2026


Key Points

  • EgoSim is a closed-loop egocentric 3D world simulator designed to generate spatially consistent interaction videos while persistently updating the underlying 3D scene state across multi-stage interactions.
  • The approach combines a Geometry-action-aware Observation Simulation model with an Interaction-aware State Updating module to reduce structural drift and handle non-static world changes during simulation.
  • To address scarce aligned training data, EgoSim uses a scalable pipeline that extracts point clouds, camera trajectories, and embodiment actions from large-scale in-the-wild monocular egocentric videos.
  • The accompanying EgoCap low-cost capture system uses uncalibrated smartphones to collect real-world data, enabling broader training and evaluation.
  • Experiments reportedly show that EgoSim outperforms prior methods in visual quality, spatial consistency, and generalization to complex scenes, and that it supports cross-embodiment transfer to robotic manipulation; code and datasets are planned for release.
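The closed-loop design above can be sketched in a few lines: generate an egocentric observation from the current 3D state and an action, then write the interaction's effects back into the state before the next stage. All class and function names below are illustrative placeholders, not the paper's actual API.

```python
# Hypothetical sketch of EgoSim's closed-loop simulation cycle.
# Names here are placeholders standing in for the paper's modules.
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Persistent 3D scene state (stand-in for a point cloud + object poses)."""
    points: list = field(default_factory=list)
    step: int = 0

def simulate_observation(state: WorldState, action: str) -> str:
    """Stand-in for the geometry-action-aware observation model:
    renders an egocentric clip conditioned on the current 3D state
    and the embodiment action."""
    return f"clip(step={state.step}, action={action})"

def update_state(state: WorldState, action: str) -> WorldState:
    """Stand-in for the interaction-aware state-updating module:
    writes the interaction's effects back into the 3D scene so later
    stages see a non-static, updated world rather than a frozen one."""
    return WorldState(points=state.points + [action], step=state.step + 1)

def closed_loop(state: WorldState, actions: list[str]) -> list[str]:
    clips = []
    for action in actions:
        clips.append(simulate_observation(state, action))  # generate video
        state = update_state(state, action)                # persist changes
    return clips

clips = closed_loop(WorldState(), ["grasp mug", "pour water"])
```

The key property is that each stage reads the state the previous stage wrote, which is what distinguishes a closed-loop simulator from single-shot video generation.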

Abstract

We introduce EgoSim, a closed-loop egocentric world simulator that generates spatially consistent interaction videos and persistently updates the underlying 3D scene state for continuous simulation. Existing egocentric simulators either lack explicit 3D grounding, causing structural drift under viewpoint changes, or treat the scene as static, failing to update world states across multi-stage interactions. EgoSim addresses both limitations by modeling 3D scenes as updatable world states. We generate embodiment interactions via a Geometry-action-aware Observation Simulation model, with spatial consistency from an Interaction-aware State Updating module. To overcome the critical data bottleneck posed by the difficulty of acquiring densely aligned scene-interaction training pairs, we design a scalable pipeline that extracts static point clouds, camera trajectories, and embodiment actions from large-scale in-the-wild monocular egocentric videos. We further introduce EgoCap, a capture system that enables low-cost real-world data collection with uncalibrated smartphones. Extensive experiments demonstrate that EgoSim significantly outperforms existing methods in visual quality, spatial consistency, and generalization to complex scenes and in-the-wild dexterous interactions, while supporting cross-embodiment transfer to robotic manipulation. Code and datasets will be released soon. The project page is at egosimulator.github.io.