DualDiffusion: A Speculative Decoding Strategy for Masked Diffusion Models

arXiv cs.LG / 4/8/2026


Key Points

  • Masked Diffusion Models can generate tokens in parallel and use bidirectional context, but their inference remains slow because bidirectional attention prevents effective key-value caching, leading to O(N^2) computation per step.
  • Prior speedups such as FastDLLM and DkvCache reduce generation steps by approximating attention and using caching strategies, but they often trade off generation quality.
  • DualDiffusion introduces a speculative decoding framework that alternates between a fast drafter (with efficient approximations) and a slower verifier (with higher-fidelity modeling).
  • By running several lightweight drafter steps followed by a single verification pass, DualDiffusion improves the quality–efficiency trade-off, tracing a better Pareto frontier than earlier methods.
  • Experiments on MMLU and GSM8K show DualDiffusion preserves high accuracy while requiring fewer generation steps, effectively pushing the performance/efficiency curve for masked diffusion language models.
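The alternation described above can be sketched as a simple decode loop. This is a toy illustration only, not the paper's implementation: `drafter_step`, `verifier_step`, and `dual_diffusion_decode` are hypothetical stand-ins for the fast approximate MDM pass and the high-fidelity verifier pass, with string placeholders instead of real token distributions.

```python
import random

MASK = "<mask>"

def drafter_step(seq):
    # Hypothetical lightweight drafter: cheaply unmasks one position per
    # step (stands in for an approximate-attention MDM forward pass).
    for i, tok in enumerate(seq):
        if tok == MASK:
            seq[i] = f"draft_{i}"
            break
    return seq

def verifier_step(seq, accept_prob=0.8):
    # Hypothetical high-fidelity verifier: re-scores drafted tokens,
    # committing the ones it agrees with and re-masking the rest.
    for i, tok in enumerate(seq):
        if tok.startswith("draft_"):
            if random.random() < accept_prob:
                seq[i] = f"tok_{i}"   # accepted: commit the token
            else:
                seq[i] = MASK         # rejected: re-mask for redrafting
    return seq

def dual_diffusion_decode(length, draft_steps=4, max_rounds=50, seed=0):
    # Alternate several cheap drafter steps with a single verifier pass,
    # repeating until every position holds a verified token.
    random.seed(seed)
    seq = [MASK] * length
    for _ in range(max_rounds):
        for _ in range(draft_steps):
            drafter_step(seq)
        verifier_step(seq)
        if MASK not in seq:
            break
    return seq
```

The point of the sketch is the cost structure: the expensive verifier runs once per `draft_steps` cheap steps, so rejected positions cost only extra drafter work, which is how the method shifts the steps-versus-accuracy trade-off.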

Abstract

Masked Diffusion Models (MDMs) offer a promising alternative to autoregressive language models by enabling parallel token generation and bidirectional context modeling. However, their inference speed is significantly limited by the inability to cache key-value pairs due to bidirectional attention, requiring O(N^2) computations at each generation step. While recent methods like FastDLLM and DkvCache improve inference speed through attention approximations and caching strategies, they achieve speedups at the cost of generation quality. We propose DualDiffusion, a speculative decoding framework for MDMs that combines fast drafter models (using efficient approximations) with slower, more accurate verifier models. By running multiple steps of a lightweight drafter followed by a single verification step, DualDiffusion achieves a superior Pareto frontier between generation steps and accuracy compared to existing approaches. We evaluate our method on MMLU and GSM8K, demonstrating that DualDiffusion maintains high accuracy while reducing the number of generation steps required, effectively pushing the quality-efficiency trade-off curve for masked diffusion language models.