ScenarioControl: Vision-Language Controllable Vectorized Latent Scenario Generation

arXiv cs.RO / 21 Apr 2026


Key Points

  • ScenarioControl is presented as a vision-language control mechanism for learned driving scenario generation, taking a text prompt or an input image to synthesize realistic 3D scenario rollouts.
  • The approach generates temporally consistent scenes that include road maps, reactive actors (with 3D bounding boxes over time), pedestrians, driving infrastructure, and ego-camera observations.
  • It operates in a vectorized latent space jointly representing road structure and dynamic agents, and uses a cross-global control mechanism combining cross-attention with a lightweight global-context branch to improve controllability while maintaining realism.
  • The authors release a training/evaluation dataset with text annotations aligned to vectorized map structures and report that ScenarioControl achieves strong control adherence and fidelity relative to the compared baselines.
  • The resulting system supports long-horizon continuation of driving scenarios and can generate rollouts from different actors’ perspectives in a coordinated way.

Abstract

We introduce ScenarioControl, the first vision-language control mechanism for learned driving scenario generation. Given a text prompt or an input image, ScenarioControl synthesizes diverse, realistic 3D scenario rollouts, including the map, 3D boxes of reactive actors over time, pedestrians, driving infrastructure, and ego-camera observations. The method generates scenes in a vectorized latent space that represents road structure and dynamic agents jointly. To connect multimodal control with sparse vectorized scene elements, we propose a cross-global control mechanism that integrates cross-attention with a lightweight global-context branch, enabling fine-grained control over road layout and traffic conditions while preserving realism. The method produces temporally consistent scenario rollouts from the perspectives of different actors in the scene, supporting long-horizon continuation of driving scenarios. To facilitate training and evaluation, we release a dataset with text annotations aligned to vectorized map structures. Extensive experiments validate that the control adherence and fidelity of ScenarioControl compare favorably to all tested methods across all experiments. Project webpage: https://light.princeton.edu/ScenarioControl
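The abstract does not spell out the fusion architecture, but the described "cross-global control mechanism" suggests a familiar pattern: each sparse scene token attends to control-signal embeddings via cross-attention, while a pooled global branch broadcasts scene-level context to every token. A minimal NumPy sketch of that idea, with illustrative shapes and placeholder weight matrices (all names here are assumptions, not the authors' code):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_global_control(scene_tokens, control_tokens, Wq, Wk, Wv, Wg):
    """Hypothetical fusion of multimodal control into vectorized scene tokens.

    scene_tokens:   (N, d) latent vectors for map elements / agents
    control_tokens: (M, d) embeddings of the text prompt or input image
    Cross-attention gives each scene token a token-specific control signal;
    a lightweight global branch adds mean-pooled control context to all tokens.
    """
    d = scene_tokens.shape[-1]
    Q = scene_tokens @ Wq                       # queries from scene,  (N, d)
    K = control_tokens @ Wk                     # keys from control,   (M, d)
    V = control_tokens @ Wv                     # values from control, (M, d)
    attn = softmax(Q @ K.T / np.sqrt(d))        # attention weights,   (N, M)
    local = attn @ V                            # per-token control,   (N, d)
    g = control_tokens.mean(axis=0) @ Wg        # pooled global context, (d,)
    return scene_tokens + local + g             # residual fusion

rng = np.random.default_rng(0)
d, N, M = 16, 8, 4
out = cross_global_control(
    rng.normal(size=(N, d)), rng.normal(size=(M, d)),
    *(rng.normal(size=(d, d)) * 0.1 for _ in range(4)))
print(out.shape)
```

The residual form (adding the control signal rather than replacing the token) is one plausible way to bias road layout and traffic conditions toward the prompt while keeping the learned scene prior intact.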