SpanVLA: Efficient Action Bridging and Learning from Negative-Recovery Samples for Vision-Language-Action Model

arXiv cs.CV / 4/22/2026


Key Points

  • The paper introduces SpanVLA, an end-to-end vision-language-action autonomous driving framework aimed at improving long-tail performance, robustness, and latency versus prior VLA systems.
  • SpanVLA uses an efficient “action bridging” approach that leverages vision-language model guidance for trajectory planning while employing a flow-matching action expert to reduce inference time.
  • The method includes GRPO-based post-training so the model learns not only from positive driving demonstrations but also from negative behaviors and how to recover from them.
  • The authors contribute mReasoning, a new real-world driving reasoning dataset targeting complex, reasoning-heavy scenarios and negative-recovery samples.
  • Experiments on NAVSIM (v1 and v2) show competitive performance, with qualitative results indicating improved planning quality and robustness across diverse scenarios.

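The "action bridging" idea above can be illustrated with a minimal sketch: instead of autoregressively decoding trajectory tokens, a flow-matching policy Euler-integrates a learned velocity field from t=0 to t=1, initialized from the historical trajectory rather than pure noise. This is an illustrative toy, not the authors' implementation; `velocity_field`, `goal`, and the waypoint values are hypothetical stand-ins (a real model would condition the field on VLM vision and reasoning guidance).

```python
import numpy as np

def sample_trajectory(velocity_field, history_traj, num_steps=10):
    """Generate a plan by Euler-integrating a flow-matching velocity
    field from t=0 to t=1, starting from the historical trajectory
    instead of pure noise, so few integration steps are needed."""
    x = np.asarray(history_traj, dtype=float).copy()
    dt = 1.0 / num_steps
    for step in range(num_steps):
        t = step * dt
        x = x + dt * velocity_field(x, t)  # one Euler step along the flow
    return x

# Toy stand-in for the learned field: flow toward a goal waypoint (x, y).
goal = np.array([12.0, 0.5])
field = lambda x, t: goal - x
plan = sample_trajectory(field, history_traj=np.array([0.0, 0.0]))
```

Because the integration starts from a reasonable initialization and runs a fixed, small number of steps, latency is bounded and does not grow with the number of trajectory tokens, which is the efficiency argument the paper makes against autoregressive action decoding.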
Abstract

Vision-Language-Action (VLA) models offer a promising paradigm for autonomous driving, leveraging world knowledge and reasoning capabilities, especially in long-tail scenarios. However, existing VLA models often suffer high latency from autoregressive action generation and exhibit limited robustness. In this paper, we propose SpanVLA, a novel end-to-end autonomous driving framework that integrates autoregressive reasoning with a flow-matching action expert. First, SpanVLA introduces an efficient bridge that leverages the vision and reasoning guidance of the VLM to plan future trajectories with a flow-matching policy conditioned on a historical-trajectory initialization, significantly reducing inference time. Second, to further improve performance and robustness, we propose a GRPO-based post-training method that enables the VLA model to learn not only from positive driving samples but also to avoid typical negative behaviors and to recover from them. We further introduce mReasoning, a new real-world driving reasoning dataset focusing on complex, reasoning-demanding scenarios and negative-recovery samples. Extensive experiments on NAVSIM (v1 and v2) demonstrate the competitive performance of SpanVLA, and qualitative results across diverse scenarios highlight its planning quality and robustness.
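The GRPO-based post-training mentioned in the abstract relies on group-relative advantages: each sampled behavior in a group is scored against the group's own reward statistics, so negative and recovery samples receive negative and positive learning signal respectively without a separate value network. The sketch below shows only this advantage computation under that assumption; the reward values are hypothetical, and the paper's actual reward design is not specified here.

```python
import numpy as np

def group_relative_advantages(rewards, eps=1e-8):
    """GRPO-style advantages: normalize each sample's reward by the
    mean and standard deviation of its own group, so above-average
    behaviors get positive advantage and below-average ones negative."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# Hypothetical group for one scenario: a good plan, a collision-like
# negative behavior, and a recovery maneuver.
adv = group_relative_advantages([1.0, -1.0, 0.6])
```

Under this scheme the negative sample is pushed down relative to its group rather than simply discarded, which matches the paper's goal of learning to avoid typical negative behaviors while reinforcing recoveries.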
