DOS: Dependency-Oriented Sampler for Masked Diffusion Language Models

arXiv cs.CL · March 17, 2026

Key Points

  • DOS is a training-free decoding strategy for masked diffusion language models that leverages inter-token dependencies to guide token updates during generation.
  • It uses attention matrices from Transformer blocks to approximate inter-token dependencies and prioritizes information from unmasked tokens when updating masked positions.
  • Empirical results show that DOS improves performance on code generation and mathematical reasoning tasks and can be integrated with existing parallel sampling methods to boost efficiency without sacrificing quality.
  • By incorporating sequence-level information, DOS underscores the role of inter-token dependencies in decoding MDLMs and offers a practical route to more efficient generation.

Abstract

Masked diffusion language models (MDLMs) have recently emerged as a new paradigm in language modeling, offering flexible generation dynamics and enabling efficient parallel decoding. However, existing decoding strategies for pre-trained MDLMs predominantly rely on token-level uncertainty criteria, while largely overlooking sequence-level information and inter-token dependencies. To address this limitation, we propose Dependency-Oriented Sampler (DOS), a training-free decoding strategy that leverages inter-token dependencies to inform token updates during generation. Specifically, DOS exploits attention matrices from transformer blocks to approximate inter-token dependencies, emphasizing information from unmasked tokens when updating masked positions. Empirical results demonstrate that DOS consistently achieves superior performance on both code generation and mathematical reasoning tasks. Moreover, DOS can be seamlessly integrated with existing parallel sampling methods, leading to improved generation efficiency without sacrificing generation quality.
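The core mechanism described above — using attention weights to estimate how strongly each masked position depends on already-unmasked tokens, and prioritizing updates accordingly — can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the function name `dos_scores`, the multiplicative combination of model confidence with the dependency score, and the toy shapes are all illustrative assumptions.

```python
import numpy as np

def dos_scores(attn, confidence, masked):
    """Score masked positions for unmasking.

    attn:       (L, L) row-normalized attention matrix (rows attend to columns)
    confidence: (L,) per-position model confidence for the predicted token
    masked:     (L,) boolean array, True where the position is still masked
    """
    unmasked = ~masked
    # Dependency score: total attention each position pays to unmasked tokens,
    # approximating how well-grounded its prediction is in revealed context.
    dep = attn[:, unmasked].sum(axis=1)
    # Combine with confidence (one simple choice); exclude unmasked positions.
    return np.where(masked, confidence * dep, -np.inf)

# Toy example: 5 positions, two already unmasked.
L = 5
rng = np.random.default_rng(0)
attn = rng.random((L, L))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalize like softmax attention
confidence = rng.random(L)
masked = np.array([True, False, True, False, True])

scores = dos_scores(attn, confidence, masked)
next_pos = int(np.argmax(scores))  # unmask the highest-scoring masked position
```

In a real decoding loop this selection would repeat each step, re-running the model and refreshing `attn` and `confidence` as positions are unmasked; since it only reorders which tokens to commit, it composes naturally with parallel sampling (unmask the top-k scores per step).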