AI Navigate

MANSION: Multi-floor lANguage-to-3D Scene generatIOn for loNg-horizon tasks

arXiv cs.CV · March 13, 2026


Key Points

  • The paper introduces MANSION, a language-driven framework that generates building-scale, multi-floor 3D environments for long-horizon robotic tasks.
  • Alongside the framework, the authors release MansionWorld, a dataset of over 1,000 diverse buildings (from hospitals to offices), and a Task-Semantic Scene Editing Agent that enables open-vocabulary customization.
  • By accounting for vertical structural constraints, the framework generates realistic, navigable buildings suitable for cross-floor planning and evaluation.
  • Benchmark results show state-of-the-art agents degrade sharply in these settings, establishing MANSION as a critical testbed for next-generation spatial reasoning and planning.

Abstract

Real-world robotic tasks are long-horizon and often span multiple floors, demanding rich spatial reasoning. However, existing embodied benchmarks are largely confined to single-floor in-house environments, failing to reflect the complexity of real-world tasks. We introduce MANSION, the first language-driven framework for generating building-scale, multi-floor 3D environments. Being aware of vertical structural constraints, MANSION generates realistic, navigable whole-building structures with diverse, human-friendly scenes, enabling the development and evaluation of cross-floor long-horizon tasks. Building on this framework, we release MansionWorld, a dataset of over 1,000 diverse buildings ranging from hospitals to offices, alongside a Task-Semantic Scene Editing Agent that customizes these environments using open-vocabulary commands to meet specific user needs. Benchmarking reveals that state-of-the-art agents degrade sharply in our settings, establishing MANSION as a critical testbed for the next generation of spatial reasoning and planning.