DVGT-2: Vision-Geometry-Action Model for Autonomous Driving at Scale

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces a new Vision-Geometry-Action (VGA) paradigm for autonomous driving that emphasizes dense 3D geometry as the primary cue for decision-making rather than sparse perception or language-augmented planning used in VLA models.
  • It proposes DVGT-2, a streaming Driving Visual Geometry Transformer that performs online inference, jointly outputting dense geometry and a planned trajectory for the current frame.
  • DVGT-2 achieves real-time applicability by using temporal causal attention, caching historical features, and a sliding-window streaming strategy to reduce repetitive computation.
  • The method reports improved dense geometry reconstruction across multiple datasets while running faster than batch-processing baselines.
  • A key claim is transferability: the same trained DVGT-2 can be applied to planning across different camera configurations without fine-tuning, validated on closed-loop NAVSIM and open-loop nuScenes benchmarks.
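
To make the streaming mechanism concrete, here is a rough, hypothetical sketch (not the paper's implementation) of how temporal causal attention with a cache of historical features can avoid reprocessing past frames: each new frame attends to cached keys/values from all earlier frames plus its own, so causality holds by construction. The function name, cache layout, and use of raw features in place of learned projections are all illustrative assumptions.

```python
import numpy as np

def causal_stream_attention(frame_feat: np.ndarray, cache: dict) -> np.ndarray:
    """One streaming step: the current frame's tokens attend to cached
    keys/values of all past frames plus the current one.

    frame_feat: (tokens, dim) features of the current frame only.
    cache: {"k": [...], "v": [...]} lists of per-frame arrays, grown in place.
    """
    d = frame_feat.shape[-1]
    # A real model would apply learned q/k/v projections; the sketch
    # reuses the raw features to stay self-contained.
    q = k = v = frame_feat
    cache["k"].append(k)
    cache["v"].append(v)
    keys = np.concatenate(cache["k"], axis=0)    # current + all past frames
    values = np.concatenate(cache["v"], axis=0)
    scores = q @ keys.T / np.sqrt(d)
    # Numerically stable softmax over the cached timeline.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values                      # (tokens, dim)
```

Because only the current frame's queries are computed at each step, per-frame cost grows with cache length rather than with the full cost of re-encoding every past frame.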

Abstract

End-to-end autonomous driving has evolved from the conventional paradigm based on sparse perception into vision-language-action (VLA) models, which learn language descriptions as an auxiliary task to facilitate planning. In this paper, we propose an alternative Vision-Geometry-Action (VGA) paradigm that advocates dense 3D geometry as the critical cue for autonomous driving. As vehicles operate in a 3D world, we argue that dense 3D geometry provides the most comprehensive information for decision-making. However, most existing geometry reconstruction methods (e.g., DVGT) rely on computationally expensive batch processing of multi-frame inputs and cannot be applied to online planning. To address this, we introduce a streaming Driving Visual Geometry Transformer (DVGT-2), which processes inputs in an online manner and jointly outputs dense geometry and trajectory planning for the current frame. We employ temporal causal attention and cache historical features to support on-the-fly inference. To further enhance efficiency, we propose a sliding-window streaming strategy and reuse historical caches within a fixed interval to avoid repetitive computation. Despite its faster speed, DVGT-2 achieves superior geometry reconstruction performance on various datasets. The same trained DVGT-2 can be directly applied to planning across diverse camera configurations without fine-tuning, including the closed-loop NAVSIM and open-loop nuScenes benchmarks.
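
The sliding-window streaming strategy described above can be illustrated with a minimal, assumption-laden sketch: cached per-frame features are kept only for the most recent `window` frames, so older entries are evicted and per-frame memory and attention cost stay bounded. The class name and interface are hypothetical, not from the paper.

```python
from collections import deque

class SlidingWindowCache:
    """Hypothetical sketch of a bounded feature cache: holds features
    for at most `window` recent frames; pushing a new frame silently
    evicts the oldest one."""

    def __init__(self, window: int):
        self.frames = deque(maxlen=window)  # deque handles eviction

    def push(self, feat) -> None:
        """Add the current frame's features; oldest frame is dropped
        once the window is full."""
        self.frames.append(feat)

    def context(self) -> list:
        """Return cached features, oldest first, for attention reuse."""
        return list(self.frames)
```

Capping the cache this way trades long-range temporal context for constant per-frame cost, which is what makes online, real-time inference feasible.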