DMGD: Train-Free Dataset Distillation with Semantic-Distribution Matching in Diffusion Models

arXiv cs.CV / 5/6/2026

📰 News · Models & Research

Key Points

  • The paper introduces DMGD (Dual Matching Guided Diffusion), a diffusion-based dataset distillation framework designed to provide effective guidance without additional training or fine-tuning stages.
  • It performs Semantic Matching via conditional likelihood optimization, achieving semantic alignment without relying on auxiliary classifiers.
  • A dynamic guidance mechanism is proposed to increase the diversity of synthetic datasets while preserving semantic consistency with the target data.
  • The method also uses optimal transport (OT) to better match the structure of the target distribution, supported by efficient approximations (Distribution Approximate Matching) and staged computation (Greedy Progressive Matching).
  • Experiments on ImageNet-Woof, ImageNet-Nette, and ImageNet-1K show that the training-free approach improves over state-of-the-art methods that require extra fine-tuning, with average accuracy gains of 2.1%, 5.4%, and 2.4% respectively.
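The first three points can be illustrated as training-free guidance: a conditional log-likelihood gradient is added to each reverse-diffusion step, with a weight that changes over the trajectory. The paper's exact sampler, score model, and schedule are not given here, so the toy denoising drift, Gaussian class likelihood, and guidance schedule below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def log_likelihood_grad(x, mu, sigma=1.0):
    # Gradient of log N(x; mu, sigma^2 I): pulls samples toward the
    # class prototype mu (stands in for conditional likelihood guidance).
    return (mu - x) / sigma**2

def guided_reverse_step(x, t, T, mu, rng, base_noise=0.1, w_max=0.5):
    # Toy reverse-diffusion update (NOT the paper's sampler): an
    # unconditional drift plus a likelihood-guidance term whose weight
    # is dynamic -- weak early (more diversity), strong late (alignment).
    uncond_drift = -0.05 * x
    w_t = w_max * (1.0 - t / T)          # dynamic guidance weight
    guidance = w_t * log_likelihood_grad(x, mu)
    noise = base_noise * np.sqrt(t / T) * rng.standard_normal(x.shape)
    return x + uncond_drift + guidance + noise

rng = np.random.default_rng(0)
mu = np.array([3.0, -1.0])               # hypothetical class prototype
x = rng.standard_normal((64, 2)) * 4.0   # 64 noisy "synthetic samples"
T = 50
for t in range(T, 0, -1):
    x = guided_reverse_step(x, t, T, mu, rng)
# The batch ends up clustered near the prototype while retaining spread
# injected by the early low-guidance, high-noise steps.
```

Because no classifier is queried, the guidance signal comes entirely from the (here, assumed Gaussian) conditional likelihood, mirroring the classifier-free design the key points describe.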

Abstract

Dataset distillation enables efficient training by distilling the information of large-scale datasets into significantly smaller synthetic datasets. Diffusion-based paradigms have emerged in recent years, offering novel perspectives for dataset distillation. However, they typically necessitate additional fine-tuning stages, and effective guidance mechanisms remain underexplored. To address these limitations, we rethink diffusion-based dataset distillation and propose a Dual Matching Guided Diffusion (DMGD) framework, centered on efficient training-free guidance. We first establish Semantic Matching via conditional likelihood optimization, eliminating the need for auxiliary classifiers. Furthermore, we propose a dynamic guidance mechanism that enhances the diversity of synthetic data while maintaining semantic alignment. Simultaneously, we introduce an optimal transport (OT) based Distribution Matching approach to further align with the target distribution structure. To ensure efficiency, we develop two enhanced strategies for the diffusion-based framework: Distribution Approximate Matching and Greedy Progressive Matching. These strategies enable effective distribution-matching guidance with minimal computational overhead. Experimental results on ImageNet-Woof, ImageNet-Nette, and ImageNet-1K demonstrate that our training-free approach achieves significant improvements, outperforming state-of-the-art (SOTA) methods requiring additional fine-tuning by average accuracy gains of 2.1%, 5.4%, and 2.4%, respectively.
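The OT-based Distribution Matching the abstract mentions can be sketched with entropic-regularized OT (Sinkhorn iterations) between synthetic and target feature sets; the resulting transport cost serves as a matching signal. The feature dimensionality, squared-Euclidean cost, and regularization strength below are assumptions for illustration, and the paper's Distribution Approximate Matching and Greedy Progressive Matching are stated to reduce this computation further.

```python
import numpy as np

def sinkhorn_plan(C, eps=0.1, n_iters=200):
    # Entropic-regularized OT between two uniform empirical measures
    # given a cost matrix C (n x m). Returns the transport plan P.
    n, m = C.shape
    C = C / C.max()                      # normalize costs for numerical stability
    K = np.exp(-C / eps)                 # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iters):             # alternating marginal scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
target = rng.standard_normal((100, 8)) + 2.0  # stand-in for target features
synth = rng.standard_normal((10, 8))          # small synthetic set
# Squared-Euclidean cost between every synthetic/target feature pair.
C = ((synth[:, None, :] - target[None, :, :]) ** 2).sum(-1)
P = sinkhorn_plan(C)
ot_cost = (P * C).sum()  # transport cost usable as a distribution-matching signal
```

Minimizing `ot_cost` with respect to the synthetic features would pull the small synthetic set toward the structure of the target distribution, which is the role the abstract assigns to its OT-based matching term.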