AI Navigate

Mask Is What DLLM Needs: A Masked Data Training Paradigm for Diffusion LLMs

arXiv cs.LG / March 18, 2026


Key Points

  • The paper proposes an Information Density Driven Smart Noise Scheduler for diffusion language models to address non-uniform information density in real-world sequences.
  • It introduces Complementary Priority Masking to decouple a training instance into mutually reinforcing reasoning and syntax samples, enabling the model to master both logical deduction and foundational sequence structure.
  • Experiments show an average ~4% accuracy improvement across four Code and Math reasoning benchmarks, outperforming uniform baselines.
  • Mechanistic analyses reveal that probabilistic priority masking mitigates contextual collapse during block diffusion training, and the processed dataset is available at https://huggingface.co/datasets/malr07/opc-sft-stage2-dense-extracted.
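The paper does not spell out the scheduler's exact formula, but the idea of an information-density-driven noise schedule can be sketched as follows: instead of masking every token with the same probability, allocate per-token mask probabilities in proportion to a density score while keeping the expected masked fraction at the target noise level. The function name and normalization below are illustrative assumptions, not the paper's implementation.

```python
def density_weighted_mask_probs(density, t):
    """Sketch of a density-aware noise schedule (assumed form, not the
    paper's exact scheduler).

    density : list of non-negative information-density scores, one per token
    t       : target noise level, i.e. desired expected fraction of masked tokens

    Returns per-token mask probabilities proportional to density, scaled so
    their mean equals t. Probabilities are clipped at 1.0, which can pull the
    realized mean slightly below t when some tokens are extremely dense.
    """
    n = len(density)
    total = sum(density)
    if total == 0:
        # Degenerate case: no density signal, fall back to uniform masking.
        return [t] * n
    raw = [d * t * n / total for d in density]
    return [min(p, 1.0) for p in raw]
```

With `density = [1, 1, 2]` and `t = 0.5`, the denser third token gets twice the mask probability of the others while the average mask rate stays at 0.5, concentrating optimization on high-density positions.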

Abstract

Discrete diffusion models offer global context awareness and flexible parallel generation. However, the uniform random noise schedulers used in standard DLLM training overlook the highly non-uniform information density inherent in real-world sequences. This wastes optimization resources on low-density structural glue while leaving high-density logical pivot points severely under-optimized. To address this, we propose an Information Density Driven Smart Noise Scheduler. By extracting information-dense hubs and applying Complementary Priority Masking, our method decouples a single training instance into mutually reinforcing reasoning and syntax samples, forcing the model to master both logical deduction and foundational sequence structure. Experiments demonstrate that our approach improves average accuracy by ~4% across four Code and Math reasoning benchmarks, significantly outperforming uniform baselines. Mechanistic analyses further reveal that probabilistic priority masking effectively mitigates contextual collapse during block diffusion training. Overall, this density-aware strategy efficiently unlocks the reasoning potential of diffusion language models at minimal annotation cost, emerging as a promising new masked data training paradigm for Diffusion LLMs. Our processed dataset can be found at https://huggingface.co/datasets/malr07/opc-sft-stage2-dense-extracted.
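The core of Complementary Priority Masking, as described above, is that one training instance yields two masked views: a "reasoning" view where information-dense positions are preferentially hidden, and a "syntax" view that hides the complement. A minimal sketch of this idea, assuming a simple probabilistic priority rule (the function name, `<mask>` token, and priority probability are illustrative, not taken from the paper):

```python
import random

MASK = "<mask>"

def complementary_priority_masks(tokens, dense_positions, p_priority=0.9, seed=0):
    """Split one sequence into two complementary masked views.

    tokens          : list of token strings
    dense_positions : set of indices judged information-dense (logical pivots)
    p_priority      : probability that a position is masked in its "preferred"
                      view (dense -> reasoning view, non-dense -> syntax view)

    Every position is masked in exactly one of the two views, so together
    they cover the full sequence: the reasoning view mostly hides dense
    tokens, the syntax view mostly hides structural glue.
    """
    rng = random.Random(seed)
    reasoning = list(tokens)
    syntax = list(tokens)
    for i in range(len(tokens)):
        # A dense token usually goes to the reasoning view's mask set;
        # with probability 1 - p_priority the roles flip, which is one way
        # to read the paper's "probabilistic priority masking".
        in_reasoning_mask = (i in dense_positions) == (rng.random() < p_priority)
        if in_reasoning_mask:
            reasoning[i] = MASK
        else:
            syntax[i] = MASK
    return reasoning, syntax
```

For example, masking `["x", "=", "sum", "(", "a", ")"]` with `dense_positions={2}` will, with high probability, hide `sum` in the reasoning view while the syntax view hides the surrounding punctuation and identifiers, so the two samples reinforce each other during training.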