Simple Self-Conditioning Adaptation for Masked Diffusion Models

arXiv cs.LG / 5/1/2026

📰 News · Models & Research

Key Points

  • Masked diffusion models typically re-infer still-masked token positions using only the mask token, which prevents effective cross-step refinement.
  • The paper introduces Self-Conditioned Masked Diffusion Models (SCMDM), a post-training adaptation that conditions each denoising step on the model’s own previous clean-state predictions.
  • SCMDM can be applied with minimal architectural changes, adds no extra denoiser evaluations during sampling, and does not require an auxiliary reference model or recurrent latent pathway (see the sampling sketch after this list).
  • Experiments across multiple domains show consistent gains over vanilla masked diffusion, including nearly a 50% reduction in generative perplexity on OWT-trained models and improved quality in discretized image synthesis, small-molecule generation, and genomic distribution modeling.
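
To make the conditioning mechanism concrete, here is a minimal sketch of a self-conditioned sampling loop under assumed details: the `TinyDenoiser` interface, the extra embedding pathway for the previous estimate, the `MASK_ID` constant, and the unmasking schedule are all illustrative choices, not the paper's actual architecture or sampler. The point it shows is that each step's clean-state prediction is carried forward, while still using only one denoiser call per step.

```python
import torch

# Illustrative constants (not from the paper)
MASK_ID = 0
VOCAB_SIZE = 256
SEQ_LEN = 32
NUM_STEPS = 16


class TinyDenoiser(torch.nn.Module):
    """Toy denoiser: predicts clean-token logits from the current
    partially masked sequence plus the previous clean-state estimate."""

    def __init__(self, dim=64):
        super().__init__()
        self.tok_emb = torch.nn.Embedding(VOCAB_SIZE, dim)
        self.prev_emb = torch.nn.Embedding(VOCAB_SIZE, dim)  # self-conditioning pathway
        self.head = torch.nn.Linear(dim, VOCAB_SIZE)

    def forward(self, x_t, x0_prev):
        h = self.tok_emb(x_t) + self.prev_emb(x0_prev)
        return self.head(h)  # (batch, seq_len, vocab) logits


@torch.no_grad()
def sample(model, batch_size=2):
    x = torch.full((batch_size, SEQ_LEN), MASK_ID)   # start fully masked
    x0_prev = torch.full_like(x, MASK_ID)            # no estimate before step 0
    for step in range(NUM_STEPS):
        logits = model(x, x0_prev)                   # single denoiser call per step
        x0_hat = logits.argmax(dim=-1)               # clean-state prediction for every position
        # Unmask each still-masked position with prob 1/(steps remaining),
        # so every position is committed by the final step.
        unmask = (x == MASK_ID) & (torch.rand(x.shape) < 1.0 / (NUM_STEPS - step))
        x = torch.where(unmask, x0_hat, x)
        # Key difference from vanilla MDM: the full clean-state estimate is
        # carried to the next step, so still-masked positions are refined
        # rather than re-inferred from the mask token alone.
        x0_prev = x0_hat
    return x


if __name__ == "__main__":
    print(sample(TinyDenoiser()))
```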

Abstract

Masked diffusion models (MDMs) generate discrete sequences by iterative denoising under an absorbing masking process. In standard masked diffusion, if a token remains masked after a reverse update, the model discards its clean-state prediction for that position; still-masked positions must therefore be repeatedly inferred from the mask token alone, which limits cross-step refinement. To address this limitation, the paper proposes a simple yet effective post-training adaptation for MDMs that conditions each denoising step on the model's own previous clean-state predictions. The resulting method, Self-Conditioned Masked Diffusion Models (SCMDM), requires minimal architectural change, introduces no recurrent latent-state pathway, relies on no auxiliary reference model, and adds no extra denoiser evaluations during sampling. This is an important departure from partial self-conditioning approaches, which require expensive training from scratch. In particular, the paper shows that partial self-conditioning, including the commonly used 50% dropout strategy for training self-conditioned models from scratch, is suboptimal in the post-training regime: once the model's self-generated clean-state estimates become informative, full specialization to refinement is preferable to mixing conditional and unconditional objectives. SCMDM is evaluated across multiple domains and consistently improves over vanilla MDM baselines, achieving nearly a 50% reduction in generative perplexity on OWT-trained models (42.89 to 23.72), alongside strong improvements in discretized image synthesis quality, small-molecule generation, and genomic distribution modeling fidelity.
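
The abstract's argument for full specialization over partial self-conditioning can be illustrated with a hedged training-step sketch. The two-pass scheme below (a no-grad pass to obtain the model's own clean-state estimate, then a conditioned pass that is always fed that estimate) is an assumption about how such a post-training objective could be set up; the `model(x_t, x0_prev)` interface and `MASK_ID` are hypothetical and reuse the toy denoiser above, not the paper's actual implementation. Note that the extra forward pass here would occur only during post-training; sampling remains one denoiser call per step, as the abstract states.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # illustrative mask-token id


def scmdm_posttrain_step(model, x0, mask_prob, optimizer):
    """One hypothetical post-training step that always self-conditions,
    i.e. full specialization to refinement rather than the 50% dropout
    used when training self-conditioned models from scratch."""
    # Forward (absorbing) process: mask each token independently.
    masked = torch.rand(x0.shape) < mask_prob           # assumes at least one True entry
    x_t = torch.where(masked, torch.full_like(x0, MASK_ID), x0)

    # Pass 1 (no grad): the model's own clean-state estimate, obtained
    # without self-conditioning (previous-estimate input set to masks).
    with torch.no_grad():
        x0_prev = model(x_t, torch.full_like(x0, MASK_ID)).argmax(dim=-1)

    # Pass 2: always condition on that estimate; a partial scheme would
    # instead replace x0_prev with masks for ~50% of training examples.
    logits = model(x_t, x0_prev)
    loss = F.cross_entropy(logits[masked], x0[masked])

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```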