LMGenDrive: Bridging Multimodal Understanding and Generative World Modeling for End-to-End Driving

arXiv cs.AI / 4/13/2026


Key Points

  • LMGenDrive is presented as a unified end-to-end autonomous driving framework that combines LLM-based multimodal understanding with generative world modeling.
  • The model takes multi-view camera inputs plus natural-language instructions and outputs both future driving videos (spatiotemporal prediction) and control signals for closed-loop driving (see the interface sketch after this list).
  • The paper argues that generative video prediction strengthens spatiotemporal scene modeling, while LLM pretraining provides semantic priors and better instruction grounding.
  • A progressive three-stage training strategy (from vision pretraining to long-horizon multi-step driving) is proposed to improve training stability and performance.
  • Experiments on closed-loop benchmarks reportedly show significant gains in instruction following, spatiotemporal understanding, and robustness to rare scenarios; the framework supports both low-latency online planning and offline autoregressive video generation.
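
To make the dual-output design concrete, here is a minimal, hypothetical sketch of such an interface in PyTorch. The paper does not publish code, so every name and dimension below (`UnifiedDrivingModel`, `d_model`, the 8192-entry video-token vocabulary, the three-way steer/throttle/brake control layout) is an illustrative placeholder, not LMGenDrive's actual architecture.

```python
import torch
import torch.nn as nn

class UnifiedDrivingModel(nn.Module):
    """Toy sketch: multi-view frames + instruction -> future-video tokens + controls.

    All modules are stand-ins; the real system uses an LLM backbone and a
    generative video decoder, neither of which is reproduced here.
    """
    def __init__(self, d_model=1024, vocab=8192, n_ctrl=3):
        super().__init__()
        # Stand-in for a per-view vision encoder (e.g. a ViT) over flattened frames.
        self.vision_encoder = nn.Linear(3 * 224 * 224, d_model)
        # Stand-in for the pretrained LLM backbone that fuses vision + language tokens.
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.video_head = nn.Linear(d_model, vocab)   # discrete future-video tokens
        self.ctrl_head = nn.Linear(d_model, n_ctrl)   # control signals (e.g. steer/throttle/brake)

    def forward(self, views, text_emb):
        # views: (B, V, 3*224*224) flattened multi-view frames
        # text_emb: (B, T, d_model) embedded natural-language instruction
        vis = self.vision_encoder(views)               # (B, V, d_model)
        tokens = torch.cat([vis, text_emb], dim=1)     # joint vision-language sequence
        h = self.backbone(tokens)
        video_logits = self.video_head(h[:, : views.size(1)])  # one token per view slot
        controls = self.ctrl_head(h[:, -1])                    # low-latency action readout
        return video_logits, controls

if __name__ == "__main__":
    model = UnifiedDrivingModel()
    views = torch.randn(2, 6, 3 * 224 * 224)       # batch of 2, six camera views
    text_emb = torch.randn(2, 12, 1024)            # 12 embedded instruction tokens
    video_logits, controls = model(views, text_emb)
    print(video_logits.shape, controls.shape)      # (2, 6, 8192), (2, 3)
```

The point of the sketch is the shared trunk: one backbone pass feeds both the generative (video) head and the planning (control) head, which is what lets video prediction act as an auxiliary signal for driving.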

Abstract

Recent years have seen remarkable progress in autonomous driving, yet generalization to long-tail and open-world scenarios remains a major bottleneck for large-scale deployment. To address this challenge, some works use LLMs and VLMs for vision-language understanding and reasoning, enabling vehicles to interpret rare and safety-critical situations when generating actions. Others study generative world models to capture the spatio-temporal evolution of driving scenes, allowing agents to imagine possible futures before acting. Inspired by human intelligence, which unifies understanding and imagination, we explore a unified model for autonomous driving. We present LMGenDrive, the first framework that combines LLM-based multimodal understanding with generative world models for end-to-end closed-loop driving. Given multi-view camera inputs and natural-language instructions, LMGenDrive generates both future driving videos and control signals. This design provides complementary benefits: video prediction improves spatio-temporal scene modeling, while the LLM contributes strong semantic priors and instruction grounding from large-scale pretraining. We further propose a progressive three-stage training strategy, from vision pretraining to multi-step long-horizon driving, to improve stability and performance. LMGenDrive supports both low-latency online planning and autoregressive offline video generation. Experiments show that it significantly outperforms prior methods on challenging closed-loop benchmarks, with clear gains in instruction following, spatio-temporal understanding, and robustness to rare scenarios. These results suggest that unifying multimodal understanding and generation is a promising direction for more generalizable and robust embodied decision-making systems.
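The abstract names only the endpoints of the progressive schedule ("vision pretraining" through "multi-step long-horizon driving"). As a hedged sketch of what such a curriculum could look like, assuming discrete video-token targets and continuous control targets: the intermediate stage, loss choices, and horizon lengths below are all assumptions for illustration, not details from the paper.

```python
import torch.nn.functional as F

# Hypothetical three-stage schedule; only the first and last stage names come
# from the abstract. The middle stage and every hyperparameter are placeholders.
STAGES = [
    {"name": "vision_pretraining",    "horizon": 1, "train_ctrl": False},
    {"name": "short_horizon_driving", "horizon": 1, "train_ctrl": True},   # assumed intermediate stage
    {"name": "long_horizon_driving",  "horizon": 8, "train_ctrl": True},
]

def train_progressively(model, make_loader, optimizer):
    for stage in STAGES:
        # Later stages draw longer multi-step rollouts from the data pipeline.
        loader = make_loader(horizon=stage["horizon"])
        for views, text_emb, video_tokens, ctrl_targets in loader:
            video_logits, controls = model(views, text_emb)
            # Future-video prediction (discrete tokens) is supervised in every stage.
            loss = F.cross_entropy(
                video_logits.flatten(0, 1),   # (B*V, vocab)
                video_tokens.flatten(),       # (B*V,) long tensor of target tokens
            )
            if stage["train_ctrl"]:
                # Control supervision is added once the driving stages begin.
                loss = loss + F.mse_loss(controls, ctrl_targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

Staging the curriculum this way matches the abstract's stated motivation: stabilize the generative backbone on vision first, then layer on control supervision and longer horizons rather than optimizing everything jointly from the start.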
