Guiding Distribution Matching Distillation with Gradient-Based Reinforcement Learning
arXiv cs.LG / 4/22/2026
Key Points
- The paper addresses a key limitation of diffusion distillation methods such as Distribution Matching Distillation (DMD): they accelerate sampling, but often at the cost of generation quality.
- It argues that simply combining reinforcement learning (RL) with distillation can produce unreliable and conflicting reward signals because raw sample evaluation is noisy and misaligned with the distillation trajectory.
- To fix this, the authors propose GDMD (Guiding Distribution Matching Distillation), which changes the reward mechanism to prioritize distillation gradients rather than raw pixel outputs.
- By reinterpreting DMD gradients as implicit target tensors, GDMD lets existing reward models evaluate the quality of distillation updates directly, and it uses this gradient-level guidance as an adaptive weight to prevent optimization divergence (a rough sketch of the idea follows this list).
- Experiments report a new state of the art in few-step generation: 4-step models outperform their multi-step teachers and beat prior DMD/R results on GenEval and human-preference metrics, and the authors highlight strong scalability potential.
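
To make the mechanism concrete, here is a minimal, hypothetical sketch of gradient-level reward weighting as the key points describe it. This is not the authors' code: `real_score`, `fake_score`, `reward_model`, and the sigmoid weighting are illustrative stand-ins, and real DMD operates on noised latents with full diffusion backbones rather than the toy linear modules used here.

```python
# Hypothetical sketch of GDMD-style gradient-level reward weighting.
# All module names and shapes are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

D = 16
real_score = nn.Linear(D, D)                    # stand-in: frozen teacher score estimate
fake_score = nn.Linear(D, D)                    # stand-in: score of the student's output distribution
reward_model = nn.Sequential(nn.Linear(D, 1))   # stand-in: scores an "implicit target" tensor
generator = nn.Linear(D, D)                     # stand-in: few-step student generator

z = torch.randn(8, D)   # noise batch
x = generator(z)        # student samples (flattened latents here)

with torch.no_grad():
    # DMD gradient: difference between fake and real score estimates.
    dmd_grad = fake_score(x) - real_score(x)
    # Reinterpret the gradient as an implicit target: the point the
    # distillation update is pulling each sample toward.
    implicit_target = x - dmd_grad
    # Let the reward model judge the update direction rather than raw pixels,
    # then turn the reward into a bounded, per-sample adaptive weight.
    reward = reward_model(implicit_target)      # shape (8, 1)
    weight = torch.sigmoid(reward)              # keeps the weighting from diverging

# Surrogate loss whose gradient w.r.t. x equals weight * dmd_grad
# (the stop-gradient construction commonly used for DMD-style objectives).
loss = 0.5 * ((x - (x - weight * dmd_grad).detach()) ** 2).sum()
loss.backward()
```

In this form the reward never touches raw samples in isolation: it scores the point each sample is being pulled toward, so a low reward damps that sample's update rather than injecting a conflicting training signal.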