ParallelVLM: Lossless Video-LLM Acceleration with Visual Alignment Aware Parallel Speculative Decoding

arXiv cs.CV / 3/23/2026


Key Points

  • ParallelVLM is a training-free draft-then-verify speculative decoding framework for video LLMs that uses two parallel stages and an Unbiased Verifier-Guided Pruning strategy to reduce positional bias and better align draft and target models.
  • The approach addresses mutual waiting and limited speedup issues in long-video decoding to improve hardware utilization and inference efficiency.
  • It achieves lossless acceleration by expanding the draft window by about 1.6–1.8x while maintaining high accepted lengths.
  • Experimental results show substantial speedups over vanilla autoregressive decoding, e.g., 3.36x on LLaVA-Onevision-72B and 2.42x on Qwen2.5-VL-32B.

Abstract

Although current Video-LLMs achieve impressive performance in video understanding tasks, their autoregressive decoding efficiency remains constrained by the massive number of video tokens. Visual token pruning can partially ease this bottleneck, yet existing approaches still suffer from information loss and yield only modest acceleration in decoding. In this paper, we propose ParallelVLM, a training-free draft-then-verify speculative decoding framework that overcomes both the mutual-waiting and limited speedup-ratio problems between draft and target models in long-video settings. ParallelVLM features two parallelized stages that maximize hardware utilization, and incorporates an Unbiased Verifier-Guided Pruning strategy that better aligns the draft and target models by eliminating the positional bias in attention-guided pruning. Extensive experiments demonstrate that ParallelVLM effectively expands the draft window by 1.6–1.8× with high accepted lengths, and achieves speedups of 3.36× on LLaVA-Onevision-72B and 2.42× on Qwen2.5-VL-32B across various video understanding benchmarks compared with vanilla autoregressive decoding.
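To make the draft-then-verify idea concrete, below is a minimal generic sketch of greedy speculative decoding: a cheap draft model proposes a window of K tokens, the target model verifies them, and the longest agreeing prefix is kept plus one corrected token. This is the standard acceptance scheme that guarantees output identical to the target model alone (the "lossless" property); it is not ParallelVLM's parallelized two-stage implementation, and `draft_next`/`target_next` are hypothetical stand-ins for real model calls.

```python
# Toy sketch of draft-then-verify speculative decoding (greedy variant).
# draft_next / target_next are hypothetical callables mapping a token
# prefix to the next token -- stand-ins for real LLM forward passes.

def speculative_step(prefix, draft_next, target_next, k=4):
    # 1. Draft stage: the small model autoregressively proposes k tokens.
    draft, ctx = [], list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # 2. Verify stage: the target checks all k positions (a real system
    #    scores them in a single batched forward pass; emulated here
    #    token by token). Keep the longest matching prefix.
    accepted, ctx = [], list(prefix)
    for t in draft:
        expected = target_next(ctx)
        if t == expected:
            accepted.append(t)
            ctx.append(t)
        else:
            # Mismatch: substitute the target's own token and stop,
            # so output always equals pure target decoding.
            accepted.append(expected)
            break
    else:
        # All k drafts accepted: the target contributes one bonus token.
        accepted.append(target_next(ctx))
    return accepted

# Toy deterministic "models": the draft agrees with the target except
# at every third position.
target_seq = [1, 2, 3, 4, 5, 6, 7, 8]
def target_next(ctx):
    return target_seq[len(ctx)]
def draft_next(ctx):
    i = len(ctx)
    return target_seq[i] if i % 3 != 2 else -1

out = speculative_step([], draft_next, target_next, k=4)  # -> [1, 2, 3]
```

Two correctly drafted tokens are accepted per target pass here; with a well-aligned draft model (the goal of the paper's verifier-guided pruning), the accepted length per pass grows and so does the speedup.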