SpanVLA: Efficient Action Bridging and Learning from Negative-Recovery Samples for Vision-Language-Action Model
arXiv cs.CV / 4/22/2026
Key Points
- The paper introduces SpanVLA, an end-to-end vision-language-action (VLA) autonomous driving framework aimed at improving long-tail performance and robustness while reducing latency relative to prior VLA systems.
- SpanVLA uses an efficient “action bridging” approach: vision-language model guidance steers trajectory planning, while a flow-matching action expert keeps inference time low (see the sketch after this list).
- The method adds GRPO-based post-training so the model learns not only from positive driving demonstrations but also from negative behaviors and how to recover from them (a second sketch follows the list).
- The authors contribute mReasoning, a new real-world driving reasoning dataset targeting complex, reasoning-heavy scenarios and negative-recovery samples.
- Experiments on NAVSIM (v1 and v2) show competitive performance, with qualitative results indicating improved planning quality and robustness across diverse scenarios.
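The action-bridging point above names the mechanism without detail, so here is a minimal sketch of a flow-matching action expert conditioned on VLM features. The class and function names, tensor shapes, network size, and the linear-interpolant training recipe are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a flow-matching action expert conditioned on
# vision-language model features. Names, shapes, and hyperparameters
# are illustrative assumptions, not SpanVLA's actual implementation.
import torch
import torch.nn as nn

class FlowMatchingActionExpert(nn.Module):
    """Predicts the velocity field that transports Gaussian noise to an
    action trajectory, conditioned on VLM guidance features."""
    def __init__(self, action_dim: int, horizon: int, cond_dim: int, hidden: int = 256):
        super().__init__()
        self.horizon, self.action_dim = horizon, action_dim
        in_dim = horizon * action_dim + cond_dim + 1  # trajectory + condition + time
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, horizon * action_dim),
        )

    def forward(self, x_t, cond, t):
        h = torch.cat([x_t.flatten(1), cond, t[:, None]], dim=1)
        return self.net(h).view(-1, self.horizon, self.action_dim)

def flow_matching_loss(model, actions, cond):
    """Conditional flow matching with a linear interpolant:
    x_t = (1 - t) * noise + t * actions, target velocity = actions - noise."""
    noise = torch.randn_like(actions)
    t = torch.rand(actions.shape[0], device=actions.device)
    x_t = (1 - t)[:, None, None] * noise + t[:, None, None] * actions
    v_pred = model(x_t, cond, t)
    return ((v_pred - (actions - noise)) ** 2).mean()

@torch.no_grad()
def sample_trajectory(model, cond, steps: int = 5):
    """Few-step Euler integration from noise to a trajectory; the small,
    fixed step count is what keeps inference latency low."""
    x = torch.randn(cond.shape[0], model.horizon, model.action_dim, device=cond.device)
    for i in range(steps):
        t = torch.full((cond.shape[0],), i / steps, device=cond.device)
        x = x + model(x, cond, t) / steps
    return x
```

The latency claim in the key points follows from `sample_trajectory`: a handful of fixed Euler steps replaces token-by-token action decoding.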
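The GRPO post-training point also admits a short illustration. Below is a hedged sketch of group-relative advantage estimation with a clipped surrogate, the core of generic GRPO; the summary does not specify how SpanVLA scores positive versus negative-recovery rollouts, so the scalar-reward interface is an assumption.

```python
# Hedged sketch of GRPO-style post-training machinery. The scalar-reward
# interface and the omission of the KL penalty are simplifying assumptions;
# this is the generic GRPO recipe, not SpanVLA's exact objective.
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """rewards: (G,) scores for G rollouts of the same scene. GRPO normalizes
    each reward against the group mean/std, so recovery behaviors that
    outscore failed ones receive positive advantage without a learned critic."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

def grpo_policy_loss(logp_new, logp_old, advantages, clip: float = 0.2):
    """PPO-style clipped surrogate applied per rollout with group-relative
    advantages (the usual KL term to a reference policy is omitted here)."""
    ratio = (logp_new - logp_old).exp()
    clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantages
    return -torch.min(ratio * advantages, clipped).mean()
```

Under this illustrative framing, a negative-recovery sample from mReasoning would enter a group as a rollout whose reward reflects the recovery outcome, so the group normalization pushes the policy toward recovering rather than merely imitating positives.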