DepCap: Adaptive Block-Wise Parallel Decoding for Efficient Diffusion LM Inference

arXiv cs.LG / 4/20/2026


Key Points

  • Diffusion language models can decode in parallel and refine the full sequence globally, but inference must balance generation quality against decoding speed.
  • Existing block-wise DLM decoding approaches often use fixed block schedules or weak local signals for block boundaries, and rely on conservative confidence-based parallel decoding that can limit the quality-speed trade-off.
  • The DepCap framework introduces better decision signals for block boundaries (using cross-step information from the influence of the last decoded block) and for safe parallel decoding (selecting a conflict-free token subset within each block).
  • DepCap is training-free and plug-and-play, making it compatible with multiple diffusion LMs and existing KV-cache strategies for block-wise inference.
  • Experiments on several DLM backbones and reasoning/coding benchmarks report up to a 5.63× speedup with negligible quality degradation, supported by an information-theoretic rationale for the block-partitioning criterion.

Abstract

Diffusion language models (DLMs) have emerged as a promising alternative to autoregressive language generation due to their potential for parallel decoding and global refinement of the entire sequence. To unlock this potential, DLM inference must carefully balance generation quality and decoding speed. Recent block-wise DLM decoding methods improve this trade-off by performing diffusion-based decoding sequentially in blocks. However, existing methods typically rely on fixed block schedules or current-step local signals to determine block boundaries, and use conservative confidence-based parallel decoding to avoid conflicts, limiting the quality-speed trade-off. In this paper, we argue that block-wise DLM inference requires more suitable signals for its two core decisions: cross-step signals for determining block boundaries, and token-level conflict signals for parallel decoding. Based on this view, we propose DepCap, a training-free framework for efficient block-wise DLM inference. Specifically, DepCap instantiates the cross-step signal as the influence of the last decoded block and uses it to adaptively determine how far the next block should extend, while identifying a conflict-free subset of tokens for safe parallel decoding within each block, enabling substantial inference acceleration with negligible quality degradation. DepCap is a plug-and-play method applicable to various DLMs, and compatible with existing KV-cache strategies for block-wise DLM inference. An information-theoretic analysis further suggests that the cumulative last-block influence on a candidate block is approximately additive across tokens, supporting the proposed block-partitioning criterion. Experimental results show that DepCap achieves favorable speed-quality trade-offs across multiple DLM backbones and reasoning and coding benchmarks, with up to a 5.63× speedup without significant performance degradation.
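To make the two decisions concrete, here is a toy sketch of a block-wise decoding loop in the spirit described above. It is not DepCap itself: the per-token `scores` stand in for the paper's last-block influence signal (which the abstract says is approximately additive across tokens, so we sum it), and a simple confidence threshold stands in for the actual conflict-free-subset rule. All function names, the `budget`, and the `threshold` are hypothetical illustrations.

```python
def choose_block_end(scores, start, budget):
    """Adaptively extend the next block while the cumulative (additive)
    last-block influence stays within a budget -- a stand-in for the
    paper's cross-step block-boundary signal."""
    end, total = start, 0.0
    for pos in range(start, len(scores)):
        total += scores[pos]
        if total > budget:
            break
        end = pos + 1
    return max(end, start + 1)  # always decode at least one token

def conflict_free_subset(confidences, block, threshold):
    """Commit only tokens whose confidence clears the threshold; the rest
    (potentially conflicting) wait for a later refinement step."""
    return [i for i in block if confidences[i] >= threshold]

def decode(scores, confidences, budget, threshold):
    """Run the block-wise loop over a toy sequence; returns the number of
    decoding steps (fewer steps = more parallelism)."""
    n, decoded, cursor, steps = len(scores), set(), 0, 0
    while cursor < n:
        end = choose_block_end(scores, cursor, budget)
        block = [i for i in range(cursor, end) if i not in decoded]
        committed = conflict_free_subset(confidences, block, threshold)
        if not committed:  # guarantee progress: commit the safest token
            committed = [max(block, key=lambda i: confidences[i])]
        decoded.update(committed)
        while cursor < n and cursor in decoded:  # advance past decoded prefix
            cursor += 1
        steps += 1
    return steps
```

For example, `decode([0.2, 0.3, 0.6, 0.1, 0.2], [0.9, 0.8, 0.3, 0.9, 0.9], budget=0.6, threshold=0.7)` finishes a 5-token sequence in 3 steps rather than 5, since the two high-confidence blocks are committed in parallel.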