HY-World 2.0: A Multi-Modal World Model for Reconstructing, Generating, and Simulating 3D Worlds

arXiv cs.CV / 4/17/2026


Key Points

  • HY-World 2.0 is a multi-modal world model that takes text, single-view images, multi-view images, and videos as inputs to produce 3D world representations.
  • Using text or single-view inputs, it generates high-fidelity, navigable 3D Gaussian Splatting (3DGS) scenes via a four-stage pipeline: Panorama Generation (HY-Pano 2.0), Trajectory Planning (WorldNav), World Expansion (WorldStereo 2.0), and World Composition (WorldMirror 2.0).
  • The framework introduces upgrades to panorama fidelity and improves both 3D scene understanding/planning and multi-view/video-based reconstruction through refinements to WorldStereo and WorldMirror.
  • It also provides WorldLens, a high-performance, engine-agnostic 3DGS rendering platform with features like automatic IBL lighting, efficient collision detection, and training-rendering co-design to support interactive exploration with characters.
  • Experiments on multiple benchmarks show state-of-the-art results among open-source methods, with performance comparable to the closed-source model Marble, and the authors release model weights, code, and technical details for reproducibility.
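The four-stage pipeline described above can be sketched as a simple sequential orchestrator. This is a minimal illustrative sketch only: the class and function names below are hypothetical and do not reflect the released HY-World 2.0 API; it shows only the stage ordering (HY-Pano 2.0 → WorldNav → WorldStereo 2.0 → WorldMirror 2.0), with each stage stubbed out.

```python
# Hypothetical sketch of HY-World 2.0's four-stage generation pipeline.
# All names are illustrative stand-ins, not the project's actual API.
from dataclasses import dataclass, field


@dataclass
class WorldState:
    """Accumulates intermediate artifacts as the pipeline runs."""
    prompt: str
    stages_run: list = field(default_factory=list)


def panorama_generation(state: WorldState) -> WorldState:
    # Stage a) HY-Pano 2.0: text or single-view image -> panorama
    state.stages_run.append("HY-Pano 2.0")
    return state


def trajectory_planning(state: WorldState) -> WorldState:
    # Stage b) WorldNav: plan navigable camera trajectories in the scene
    state.stages_run.append("WorldNav")
    return state


def world_expansion(state: WorldState) -> WorldState:
    # Stage c) WorldStereo 2.0: keyframe-based view generation with memory
    state.stages_run.append("WorldStereo 2.0")
    return state


def world_composition(state: WorldState) -> WorldState:
    # Stage d) WorldMirror 2.0: feed-forward 3D prediction -> 3DGS scene
    state.stages_run.append("WorldMirror 2.0")
    return state


def generate_world(prompt: str) -> WorldState:
    """Run the four stages in order on a text prompt."""
    state = WorldState(prompt=prompt)
    for stage in (panorama_generation, trajectory_planning,
                  world_expansion, world_composition):
        state = stage(state)
    return state


result = generate_world("a sunlit forest clearing")
print(result.stages_run)
```

In the actual system each stage would produce real artifacts (a panorama image, camera paths, expanded keyframe views, and a composed 3DGS scene) rather than a label; the stub form is only meant to make the data flow between the four stages concrete.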

Abstract

We introduce HY-World 2.0, a multi-modal world model framework that advances our prior project HY-World 1.0. HY-World 2.0 accommodates diverse input modalities, including text prompts, single-view images, multi-view images, and videos, and produces 3D world representations. With text or single-view image inputs, the model performs world generation, synthesizing high-fidelity, navigable 3D Gaussian Splatting (3DGS) scenes. This is achieved through a four-stage method: a) Panorama Generation with HY-Pano 2.0, b) Trajectory Planning with WorldNav, c) World Expansion with WorldStereo 2.0, and d) World Composition with WorldMirror 2.0. Specifically, we introduce key innovations to enhance panorama fidelity, enable 3D scene understanding and planning, and upgrade WorldStereo, our keyframe-based view generation model with consistent memory. We also upgrade WorldMirror, a feed-forward model for universal 3D prediction, by refining its architecture and learning strategy, enabling world reconstruction from multi-view images or videos. We further introduce WorldLens, a high-performance 3DGS rendering platform featuring a flexible engine-agnostic architecture, automatic IBL lighting, efficient collision detection, and training-rendering co-design, enabling interactive exploration of 3D worlds with character support. Extensive experiments demonstrate that HY-World 2.0 achieves state-of-the-art performance on several benchmarks among open-source approaches, delivering results comparable to the closed-source model Marble. We release all model weights, code, and technical details to facilitate reproducibility and support further research on 3D world models.