Wanderland: Geometrically Grounded Simulation for Open-World Embodied AI

arXiv cs.RO / 3/30/2026


Key Points

  • Wanderland is presented as a real-to-sim framework aimed at enabling reproducible, closed-loop evaluation for open-world embodied AI tasks like visual navigation.
  • The approach combines multi-sensor capture, reliable scene reconstruction, accurate geometric grounding, and robust view synthesis to reduce sim-to-real gaps common in prior methods.
  • The paper argues that existing image-only pipelines scale poorly, and it demonstrates how reconstruction and geometry quality directly affect novel view synthesis fidelity.
  • It further shows that these sensing and rendering limitations can adversely impact navigation policy learning and the reliability of evaluation results.
  • The dataset and raw sensor data are positioned as a benchmark/testbed not only for embodied navigation but also for 3D reconstruction and novel view synthesis model evaluation.
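The "closed-loop evaluation" the paper centers on means the policy's actions feed back into the next observation, so results depend on the whole trajectory rather than a fixed image set. A minimal sketch of that loop is below; all names (`ToySim`, `greedy_policy`, `evaluate`) are illustrative stand-ins, not Wanderland's actual API, and a toy goal-vector observation replaces the photorealistic rendering a real framework would provide.

```python
# Minimal sketch of closed-loop navigation evaluation, the setting Wanderland
# targets. All class and function names are hypothetical stand-ins.
import math
from dataclasses import dataclass

@dataclass
class SimState:
    x: float
    y: float

class ToySim:
    """Stand-in simulator: agent actions feed back into the next observation."""
    def __init__(self, goal=(5.0, 0.0)):
        self.state = SimState(0.0, 0.0)
        self.goal = goal

    def observe(self):
        # A real system would render a sensor view here; we expose the
        # vector from the agent to the goal instead.
        return (self.goal[0] - self.state.x, self.goal[1] - self.state.y)

    def step(self, action):
        dx, dy = action
        self.state.x += dx
        self.state.y += dy

def greedy_policy(obs, step_size=1.0):
    # Toy policy: move toward the goal, capped at step_size per step.
    dist = math.hypot(*obs)
    if dist < 1e-9:
        return (0.0, 0.0)
    scale = min(step_size, dist) / dist
    return (obs[0] * scale, obs[1] * scale)

def evaluate(sim, policy, max_steps=20, success_radius=0.1):
    """Closed-loop rollout: success depends on the full action history,
    which is why reproducible simulation matters for benchmarking."""
    for t in range(max_steps):
        obs = sim.observe()
        if math.hypot(*obs) < success_radius:
            return True, t
        sim.step(policy(obs))
    return math.hypot(*sim.observe()) < success_radius, max_steps

success, steps = evaluate(ToySim(), greedy_policy)
```

Because each observation depends on the prior actions, any visual or geometric sim-to-real gap in the simulator compounds over the rollout, which is the failure mode the Key Points attribute to image-only reconstruction pipelines.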

Abstract

Reproducible closed-loop evaluation remains a major bottleneck in Embodied AI such as visual navigation. A promising path forward is high-fidelity simulation that combines photorealistic sensor rendering with geometrically grounded interaction in complex, open-world urban environments. Although recent video-3DGS methods ease open-world scene capturing, they are still unsuitable for benchmarking due to large visual and geometric sim-to-real gaps. To address these challenges, we introduce Wanderland, a real-to-sim framework that features multi-sensor capture, reliable reconstruction, accurate geometry, and robust view synthesis. Using this pipeline, we curate a diverse dataset of indoor-outdoor urban scenes and systematically demonstrate how image-only pipelines scale poorly, how geometry quality impacts novel view synthesis, and how all of these adversely affect navigation policy learning and evaluation reliability. Beyond serving as a trusted testbed for embodied navigation, Wanderland's rich raw sensor data further allows benchmarking of 3D reconstruction and novel view synthesis models. Our work establishes a new foundation for reproducible research in open-world embodied AI. Project website is at https://ai4ce.github.io/wanderland/.