LivingWorld: Interactive 4D World Generation with Environmental Dynamics

arXiv cs.CV / 4/3/2026


Key Points

  • LivingWorld is an interactive framework that generates 4D worlds with environment dynamics (e.g., clouds, water, smoke) starting from a single image rather than producing mostly static 3D geometry.
  • The method maintains globally coherent, temporally consistent motion across an expanding scene by progressively building a globally consistent motion field and using a geometry-aware alignment module to resolve directional and scale ambiguities across views.
  • It represents motion with a compact hash-based motion field that supports efficient querying and stable propagation of dynamics throughout the scene, improving runtime feasibility.
  • The system enables bidirectional motion propagation during rendering to generate long, temporally coherent 4D sequences without expensive video-based refinement.
  • The authors report interactive performance on a single RTX 5090 GPU: about 9 seconds per scene-expansion step plus roughly 3 seconds for motion alignment and motion-field updates; video demos are available online.
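To make the "compact hash-based motion field" idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation): 3D points hash into a fixed-size table of motion vectors via an Instant-NGP-style spatial hash, and queries trilinearly interpolate the eight surrounding grid corners. The class name, table layout, and prime constants are illustrative assumptions.

```python
import numpy as np

# Large primes commonly used for spatial hashing (assumption, not from the paper).
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

class HashMotionField:
    """Hypothetical sketch: a fixed-size hash table of 3D motion vectors."""

    def __init__(self, table_size=2**16, resolution=64, seed=0):
        rng = np.random.default_rng(seed)
        self.table_size = table_size
        self.resolution = resolution
        # Each table entry stores one 3D motion (flow) vector.
        self.table = rng.normal(scale=0.01, size=(table_size, 3))

    def _hash(self, ijk):
        # XOR-fold integer grid coordinates into a table index.
        h = np.zeros(ijk.shape[:-1], dtype=np.uint64)
        for d in range(3):
            h ^= ijk[..., d].astype(np.uint64) * PRIMES[d]
        return h % np.uint64(self.table_size)

    def query(self, xyz):
        # xyz in [0, 1]^3; trilinearly interpolate motion from 8 grid corners.
        g = xyz * (self.resolution - 1)
        i0 = np.floor(g).astype(np.int64)
        frac = g - i0
        out = np.zeros_like(xyz, dtype=float)
        for corner in range(8):
            offs = np.array([(corner >> d) & 1 for d in range(3)])
            w = np.prod(np.where(offs, frac, 1.0 - frac), axis=-1)
            out += w[..., None] * self.table[self._hash(i0 + offs)]
        return out
```

Because queries are O(1) table lookups plus an 8-corner blend, such a structure keeps memory compact and lookups cheap regardless of how far the scene has expanded, which is presumably what makes progressive expansion tractable.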

Abstract

We introduce LivingWorld, an interactive framework for generating 4D worlds with environmental dynamics from a single image. While recent advances in 3D scene generation enable large-scale environment creation, most approaches focus primarily on reconstructing static geometry, leaving scene-scale environmental dynamics such as clouds, water, or smoke largely unexplored. Modeling such dynamics is challenging because motion must remain coherent across an expanding scene while supporting low-latency user feedback. LivingWorld addresses this challenge by progressively constructing a globally coherent motion field as the scene expands. To maintain global consistency during expansion, we introduce a geometry-aware alignment module that resolves directional and scale ambiguities across views. We further represent motion using a compact hash-based motion field, enabling efficient querying and stable propagation of dynamics throughout the scene. This representation also supports bidirectional motion propagation during rendering, producing long and temporally coherent 4D sequences without relying on expensive video-based refinement. On a single RTX 5090 GPU, generating each new scene expansion step requires 9 seconds, followed by 3 seconds for motion alignment and motion field updates, enabling interactive 4D world generation with globally coherent environmental dynamics. Video demonstrations are available at cvsp-lab.github.io/LivingWorld.
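The abstract's "bidirectional motion propagation" can be illustrated with a toy sketch, under the simplifying assumption of a stationary velocity field sampled from the motion representation: frames after the anchor time are produced by integrating the field forward, and frames before it by integrating backward, so one anchor yields a long sequence in both directions. The function name and Euler integration scheme are assumptions for illustration, not the paper's method.

```python
import numpy as np

def propagate_bidirectional(points, velocity_fn, t0, times, dt=0.1):
    """Toy illustration: from anchor time t0, advect points forward
    (t > t0) or backward (t < t0) through a stationary velocity field
    using simple Euler steps of size dt."""
    frames = {}
    for t in times:
        p = points.copy()
        n_steps = int(round(abs(t - t0) / dt))
        sign = 1.0 if t >= t0 else -1.0  # direction of propagation
        for _ in range(n_steps):
            p = p + sign * dt * velocity_fn(p)
        frames[t] = p
    return frames
```

For example, points advected through a constant unit velocity field from t0 = 0 land at +1 for t = 1 and at -1 for t = -1, giving a temporally symmetric sequence around the anchor without any per-frame video refinement.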