Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM

arXiv cs.CL / 4/9/2026


Key Points

  • Fast-dVLM proposes an efficient block-diffusion vision-language model that improves inference throughput over autoregressive VLMs by enabling KV-cache-compatible parallel decoding and speculative block decoding.
  • The work addresses the main challenge of adapting diffusion to multimodal VLMs, including handling continuous visual representations alongside discrete text tokens while preserving pretrained multimodal capabilities.
  • It compares two AR-to-diffusion conversion strategies and finds that direct conversion of the full autoregressive VLM in one stage is substantially more efficient than a two-stage text-only diffusion adaptation under similar training budgets.
  • Fast-dVLM includes multiple multimodal diffusion adaptations (e.g., block size annealing, causal context attention, auto-truncation masking, and vision efficient concatenation) to make block diffusion effective in the VLM setting.
  • Experiments across 11 multimodal benchmarks show generation quality matching autoregressive decoding, and with SGLang integration plus FP8 quantization it delivers an end-to-end speedup of more than 6x.
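The throughput gain in the first key point comes from committing several tokens of a block per denoising step instead of one token per forward pass. The toy sketch below illustrates that decoding pattern; the scorer, confidence rule, and all names are illustrative stand-ins, not the paper's implementation.

```python
# Toy sketch of block-wise iterative unmasking (the decoding pattern used
# by block-diffusion models). A dummy scorer stands in for the denoiser.
MASK = -1

def dummy_scorer(block):
    # Pretend model: proposes token pos+1 at each position, with higher
    # confidence at earlier positions. Purely illustrative.
    return [(pos + 1, 1.0 / (pos + 1)) for pos in range(len(block))]

def decode_block(block_size, steps_per_block):
    """Fill one block of MASK tokens over a few parallel refinement steps."""
    block = [MASK] * block_size
    per_step = max(1, block_size // steps_per_block)
    while MASK in block:
        proposals = dummy_scorer(block)
        # Rank still-masked positions by confidence; commit the top-k.
        masked = [p for p in range(block_size) if block[p] == MASK]
        masked.sort(key=lambda p: -proposals[p][1])
        for p in masked[:per_step]:
            block[p] = proposals[p][0]
    return block

print(decode_block(8, 4))  # → [1, 2, 3, 4, 5, 6, 7, 8] in 4 steps, not 8
```

In a real model each refinement step is one batched forward pass over the block, which is what turns memory-bandwidth-bound batch-size-one decoding into compute-parallel work.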

Abstract

Vision-language models (VLMs) predominantly rely on autoregressive decoding, which generates tokens one at a time and fundamentally limits inference throughput. This limitation is especially acute in physical AI scenarios such as robotics and autonomous driving, where VLMs are deployed on edge devices at batch size one, making AR decoding memory-bandwidth-bound and leaving hardware parallelism underutilized. While block-wise discrete diffusion has shown promise for parallel text generation, extending it to VLMs remains challenging due to the need to jointly handle continuous visual representations and discrete text tokens while preserving pretrained multimodal capabilities. We present Fast-dVLM, a block-diffusion-based VLM that enables KV-cache-compatible parallel decoding and speculative block decoding for inference acceleration. We systematically compare two AR-to-diffusion conversion strategies: a two-stage approach that first adapts the LLM backbone with text-only diffusion fine-tuning before multimodal training, and a direct approach that converts the full AR VLM in one stage. Under comparable training budgets, direct conversion proves substantially more efficient by leveraging the already multimodally aligned VLM; we therefore adopt it as our recommended recipe. We introduce a suite of multimodal diffusion adaptations (block size annealing, causal context attention, auto-truncation masking, and vision efficient concatenation) that collectively enable effective block diffusion in the VLM setting. Extensive experiments across 11 multimodal benchmarks show Fast-dVLM matches its autoregressive counterpart in generation quality. With SGLang integration and FP8 quantization, Fast-dVLM achieves over 6x end-to-end inference speedup over the AR baseline.
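Among the adaptations the abstract lists, "causal context attention" is the one with a natural structural reading: committed context stays causal (so its KV cache remains valid), while the block currently being denoised attends bidirectionally within itself and to all context. The sketch below builds such a mask; this is our reading of the term, and the paper's exact mask may differ.

```python
def block_diffusion_mask(context_len, block_size):
    """Boolean attention mask (True = query may attend to key).

    Committed context tokens use standard causal attention, so their
    KV cache is reusable; positions in the block being denoised attend
    to all context and bidirectionally within the block. Illustrative
    reading of "causal context attention", not the paper's code.
    """
    n = context_len + block_size
    mask = [[False] * n for _ in range(n)]
    for q in range(n):
        for k in range(n):
            if q < context_len:
                mask[q][k] = k <= q   # context rows: causal
            else:
                mask[q][k] = True     # block rows: full context + block
    return mask

m = block_diffusion_mask(3, 2)  # 3 cached context tokens, block of 2
```

A mask of this shape is KV-cache-compatible because once a block is committed, its rows can be re-frozen as ordinary causal context for the next block.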