DriVerse: Navigation World Model for Driving Simulation via Multimodal Trajectory Prompting and Motion Alignment

arXiv cs.RO / April 28, 2026


Key Points

  • DriVerse is a generative driving world model that simulates navigation-driven driving scenes from a single image plus a specified future trajectory.
  • The paper argues that prior world-model approaches misalign trajectory/control inputs with the implicit features of 2D generative backbones, causing low-fidelity video results.
  • DriVerse improves guidance in two complementary ways: it tokenizes trajectories into textual prompts using a predefined trend vocabulary, and it converts 3D trajectories into 2D spatial motion priors for finer control over scene elements (see the tokenization sketch after this list).
  • For dynamic objects, it adds a lightweight motion alignment module that enforces inter-frame consistency of dynamic pixels to enhance temporal coherence across long video sequences.
  • Experiments on nuScenes and Waymo show that DriVerse outperforms specialized models on future video generation with minimal training and no additional data; the authors plan to release the code and models publicly.
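
To make the trajectory-tokenization idea concrete, here is a minimal sketch of how future waypoints could be mapped to trend tokens. The paper's actual vocabulary, binning rules, and thresholds are not given in this summary, so the token names and the 5-degree turn threshold below are illustrative assumptions.

```python
import math
from typing import List, Tuple

# Hypothetical trend vocabulary; DriVerse's actual token set is not
# specified in this summary.
TURN_TOKENS = {"left": "<veer-left>", "right": "<veer-right>", "straight": "<go-straight>"}

def tokenize_trajectory(waypoints: List[Tuple[float, float]],
                        turn_threshold_deg: float = 5.0) -> str:
    """Map ego-frame (x, y) waypoints (x forward, y left) to a prompt of trend tokens."""
    tokens = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:], waypoints[2:]):
        # Signed heading change between consecutive segments,
        # in degrees, wrapped to [-180, 180).
        h0 = math.atan2(y1 - y0, x1 - x0)
        h1 = math.atan2(y2 - y1, x2 - x1)
        delta = math.degrees((h1 - h0 + math.pi) % (2 * math.pi) - math.pi)
        if delta > turn_threshold_deg:
            tokens.append(TURN_TOKENS["left"])    # positive yaw change = left turn
        elif delta < -turn_threshold_deg:
            tokens.append(TURN_TOKENS["right"])
        else:
            tokens.append(TURN_TOKENS["straight"])
    return " ".join(tokens)

# A straight segment followed by a gentle left curve yields
# "<go-straight> <veer-left> <veer-left>".
print(tokenize_trajectory([(0, 0), (5, 0), (10, 0), (15, 1), (19, 3)]))
```

Because the output is plain text, such tokens can be appended to an ordinary caption prompt, which is presumably what enables the "seamless language integration" the paper describes.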

Abstract

This paper presents DriVerse, a generative model for simulating navigation-driven driving scenes from a single image and a future trajectory. Previous autonomous driving world models directly feed either the trajectory or discrete control signals into the generation pipeline, leading to poor alignment between the control inputs and the implicit features of the 2D base generative model and, in turn, to low-fidelity video outputs. Some methods use coarse textual commands or discrete vehicle control signals, which lack the precision to guide fine-grained, trajectory-specific video generation, making them unsuitable for evaluating actual autonomous driving algorithms. DriVerse introduces explicit trajectory guidance in two complementary forms: it tokenizes trajectories into textual prompts using a predefined trend vocabulary for seamless language integration, and it converts 3D trajectories into 2D spatial motion priors to strengthen control over static content within the driving scene. To better handle dynamic objects, we further introduce a lightweight motion alignment module, which focuses on the inter-frame consistency of dynamic pixels and significantly improves the temporal coherence of moving elements over long sequences. With minimal training and no need for additional data, DriVerse outperforms specialized models on future video generation tasks across both the nuScenes and Waymo datasets. The code and models will be released to the public.
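
The conversion of 3D trajectories into 2D spatial motion priors presumably relies on standard camera projection. The sketch below shows one plausible realization under a pinhole camera model; the intrinsics `K`, the Gaussian-heatmap rasterization, and the resolution are hypothetical, since the abstract does not specify how the priors are encoded.

```python
import numpy as np

def project_to_image(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points (z forward) to Nx2 pixel coordinates."""
    uvw = (K @ points_cam.T).T          # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide

def motion_prior_map(traj_cam: np.ndarray, K: np.ndarray,
                     hw: tuple = (256, 448), sigma: float = 4.0) -> np.ndarray:
    """Rasterize projected waypoints into a soft heatmap a generator can condition on."""
    h, w = hw
    ys, xs = np.mgrid[0:h, 0:w]
    prior = np.zeros((h, w), dtype=np.float32)
    for u, v in project_to_image(traj_cam, K):
        if 0 <= u < w and 0 <= v < h:
            # Keep the per-pixel maximum over all waypoint Gaussians.
            prior = np.maximum(
                prior,
                np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * sigma ** 2)),
            )
    return prior

# Hypothetical intrinsics and a trajectory receding along the camera's z axis,
# slightly below eye level.
K = np.array([[500.0, 0.0, 224.0],
              [0.0, 500.0, 128.0],
              [0.0, 0.0, 1.0]])
traj = np.array([[0.0, 1.5, z] for z in np.linspace(8.0, 40.0, 8)])
prior = motion_prior_map(traj, K)
```

A soft heatmap like this can be concatenated with, or cross-attended by, the generator's latent features, which is one common way to inject spatial conditioning into a 2D generative backbone; whether DriVerse uses this particular encoding is not stated in the abstract.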