Co-generation of Layout and Shape from Text via Autoregressive 3D Diffusion

arXiv cs.CV / 4/21/2026

📰 News · Models & Research

Key Points

  • The paper introduces a sequential text-to-scene generation paradigm that jointly produces both scene layout and object shape/appearance, addressing limitations of prior methods that generate only one aspect.
  • It proposes a new 3D autoregressive diffusion model (3D-ARD+) that unifies autoregressive generation over multimodal tokens with diffusion-based generation of next-object 3D latents.
  • For each next object, the model uses a two-stage process: first generating coarse 3D latents in the scene space conditioned on the text and the already synthesized scene, then generating finer object-space latents for detailed geometry and appearance.
  • The method is trained on a large dataset of 230K indoor scenes paired with text instructions, and experiments with a 7B-parameter model show it can follow non-trivial spatial layouts and semantics from the text.
  • Overall, the work targets interactive 3D scene creation by improving consistency between generated scenes and complex textual descriptions of spatial arrangement, shape, and appearance.
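The two-stage, object-by-object loop described above can be sketched in code. This is a minimal illustrative skeleton only: the function names, data shapes, and toy stand-ins below are hypothetical placeholders, not the authors' actual architecture, and the real model would replace each stand-in with an autoregressive diffusion step over 3D latents.

```python
def generate_scene(text, num_objects, coarse_step, fine_step, decode):
    """Toy sketch of sequential scene generation: for each next object,
    first produce coarse scene-space latents conditioned on the text and
    the scene so far, then refine them in object space and decode."""
    scene = []  # already synthesized objects
    for _ in range(num_objects):
        # Stage 1 (hypothetical): coarse 3D latents in the shared scene
        # space, conditioned on the text and the scene generated so far.
        coarse = coarse_step(text, scene)
        # Stage 2 (hypothetical): finer latents in the smaller object
        # space, decoded into detailed geometry and appearance.
        fine = fine_step(coarse)
        scene.append({"layout": coarse, "object": decode(fine)})
    return scene

# Dummy stand-ins so the sketch runs end to end (not real model calls).
coarse_step = lambda text, scene: {"pos": len(scene), "prompt": text}
fine_step = lambda coarse: {"detail": coarse["pos"] * 2}
decode = lambda fine: f"mesh-{fine['detail']}"

scene = generate_scene("a sofa facing two chairs", 3,
                       coarse_step, fine_step, decode)
```

The key structural point the sketch captures is the autoregressive conditioning: each new object's coarse placement depends on everything generated before it, which is what lets the model follow spatial-arrangement instructions object by object.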

Abstract

Recent text-to-scene generation approaches have largely reduced the manual effort required to create 3D scenes. However, they focus on generating either a scene layout or individual objects, and few generate both. The generated scene layouts are often simple, even with an LLM's help. Moreover, the generated scene is often inconsistent with text input that contains non-trivial descriptions of the shape, appearance, and spatial arrangement of the objects. We present a new paradigm of sequential text-to-scene generation and propose a novel generative model for interactive scene creation. At its core is a 3D Autoregressive Diffusion model, 3D-ARD+, which unifies autoregressive generation over a multimodal token sequence with diffusion-based generation of next-object 3D latents. To generate the next object, the model uses one autoregressive step to produce coarse-grained 3D latents in the scene space, conditioned on both the text instructions seen so far and the already synthesized 3D scene. It then uses a second step to generate 3D latents in the smaller object space, which can be decoded into fine-grained object geometry and appearance. We curate a large dataset of 230K indoor scenes with paired text instructions for training. We evaluate a 7B-parameter 3D-ARD+ on challenging scenes and show that the model can generate and place objects following the non-trivial spatial layouts and semantics prescribed by the text instructions.