AI Navigate

CRAFT: Aligning Diffusion Models with Fine-Tuning Is Easier Than You Think

arXiv cs.CV / 3/20/2026


Key Points

  • CRAFT presents a lightweight fine-tuning paradigm that reduces training data requirements and computational cost for aligning diffusion models.
  • It combines Composite Reward Filtering to curate a high-quality dataset with an enhanced supervised fine-tuning step.
  • The authors prove that CRAFT optimizes a lower bound of group-based reinforcement learning, linking data-selected SFT to RL theory.
  • Empirically, CRAFT with 100 samples outperforms state-of-the-art preference optimization methods and converges 11-220x faster than baselines.

Abstract

Aligning diffusion models has achieved remarkable breakthroughs in generating high-quality, human preference-aligned images. Existing techniques, such as supervised fine-tuning (SFT) and DPO-style preference optimization, have become principled tools for fine-tuning diffusion models. However, SFT relies on high-quality images that are costly to obtain, while DPO-style methods depend on large-scale preference datasets, which are often inconsistent in quality. Beyond data dependency, these methods are further constrained by computational inefficiency. To address these two challenges, we propose Composite Reward Assisted Fine-Tuning (CRAFT), a lightweight yet powerful fine-tuning paradigm that requires significantly less training data while maintaining computational efficiency. It first leverages a Composite Reward Filtering (CRF) technique to construct a high-quality, consistent training dataset and then performs an enhanced variant of SFT. We also theoretically prove that CRAFT optimizes a lower bound of group-based reinforcement learning, establishing a principled connection between SFT with selected data and reinforcement learning. Our extensive empirical results demonstrate that CRAFT with only 100 samples can easily outperform recent SOTA preference optimization methods trained on thousands of preference-paired samples. Moreover, CRAFT achieves 11-220x faster convergence than the baseline preference optimization methods, highlighting its extremely high efficiency.
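The curation step described above can be sketched in a few lines: score each candidate sample with a weighted combination of reward models, then keep only the top-scoring samples for SFT. This is a minimal illustrative sketch, not the paper's implementation; the reward functions, weights, and sample format here are all assumptions.

```python
# Sketch of Composite Reward Filtering (CRF)-style data curation.
# The reward components, weights, and sample schema are illustrative
# assumptions, not the paper's actual reward models.

def composite_reward(sample, reward_fns, weights):
    """Weighted sum of several reward scores for one (prompt, image) sample."""
    return sum(w * fn(sample) for fn, w in zip(reward_fns, weights))

def crf_select(samples, reward_fns, weights, k):
    """Keep the k samples with the highest composite reward for SFT."""
    scored = sorted(
        samples,
        key=lambda s: composite_reward(s, reward_fns, weights),
        reverse=True,
    )
    return scored[:k]

# Toy usage with a stand-in reward (a real pipeline would combine
# e.g. aesthetic and human-preference reward models).
samples = [{"prompt": "a cat", "quality": q} for q in (0.2, 0.9, 0.5, 0.7)]
reward_fns = [lambda s: s["quality"]]
weights = [1.0]
top2 = crf_select(samples, reward_fns, weights, k=2)
print([s["quality"] for s in top2])  # the two highest-scoring samples
```

The selected subset would then be fed to an ordinary SFT loop, which is what lets the method get by with as few as 100 samples.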