Fast-dVLM: Efficient Block-Diffusion VLM via Direct Conversion from Autoregressive VLM
arXiv cs.CL / 4/9/2026
Key Points
- Fast-dVLM is an efficient block-diffusion vision-language model that improves inference throughput over autoregressive (AR) VLMs by enabling KV-cache-compatible parallel decoding (sketched below) and speculative block decoding.
- The work addresses the main challenges of adapting diffusion decoding to multimodal VLMs: handling continuous visual representations alongside discrete text tokens while preserving pretrained multimodal capabilities.
- It compares two AR-to-diffusion conversion strategies and finds that direct conversion of the full autoregressive VLM in one stage is substantially more efficient than a two-stage text-only diffusion adaptation under similar training budgets.
- Fast-dVLM includes multiple multimodal diffusion adaptations (e.g., block size annealing, causal context attention, auto-truncation masking, and vision efficient concatenation) to make block diffusion effective in the VLM setting.
- Experiments across 11 multimodal benchmarks show generation quality matching autoregressive decoding; with SGLang integration and FP8 quantization, it delivers a 6x+ end-to-end speedup.
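
To make the decoding scheme behind these points concrete, here is a minimal sketch of block-diffusion decoding with a block-causal attention mask: fully decoded earlier blocks are attended to causally (so their key/value entries can be cached, as in AR serving), while tokens inside the current block are denoised in parallel over a few refinement steps. All names here (`toy_denoiser`, `MASK_ID`, `BLOCK_SIZE`, the confidence-based unmasking schedule) are illustrative assumptions, not Fast-dVLM's actual API or algorithm.

```python
# Hedged sketch of block-diffusion decoding; not the paper's implementation.
import torch

VOCAB, MASK_ID, BLOCK_SIZE, NUM_STEPS = 1000, 0, 4, 2

def block_causal_mask(seq_len: int, block_size: int) -> torch.Tensor:
    """Token i may attend to token j iff j's block <= i's block.

    Earlier blocks are fully decoded, so their key/value entries can be
    cached and reused (KV-cache compatibility); tokens within the current
    block attend to each other bidirectionally, enabling parallel denoising.
    """
    blocks = torch.arange(seq_len) // block_size
    return blocks[None, :] <= blocks[:, None]  # (seq_len, seq_len) bool

def toy_denoiser(tokens: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Stand-in for the converted VLM; a real model would apply `mask`
    inside its attention layers. Returns per-position vocabulary logits."""
    return torch.randn(tokens.shape[0], VOCAB)

@torch.no_grad()
def decode_block(prefix: torch.Tensor) -> torch.Tensor:
    """Append one block of MASK_ID placeholders and denoise it in parallel."""
    tokens = torch.cat([prefix, torch.full((BLOCK_SIZE,), MASK_ID)])
    attn = block_causal_mask(tokens.numel(), BLOCK_SIZE)
    for _ in range(NUM_STEPS):                     # iterative refinement
        still_masked = tokens == MASK_ID
        if not still_masked.any():
            break
        logits = toy_denoiser(tokens, attn)
        conf, pred = logits.softmax(-1).max(-1)    # per-position confidence
        conf[~still_masked] = -1.0                 # never re-decode finished tokens
        k = max(1, int(still_masked.sum()) // NUM_STEPS)
        top = conf.topk(k).indices                 # unmask most confident first
        tokens[top] = pred[top]
    tokens[tokens == MASK_ID] = pred[tokens == MASK_ID]  # finalize leftovers
    return tokens

prompt = torch.randint(1, VOCAB, (8,))  # stand-in for the multimodal prefix
print(decode_block(prompt))
```

In a real system, each finished block's keys and values would be cached and reused when decoding later blocks, which is what makes this scheme compatible with standard AR serving stacks such as SGLang.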