
Q-Drift: Quantization-Aware Drift Correction for Diffusion Model Sampling

arXiv cs.CV / 3/20/2026


Key Points

  • Q-Drift introduces a sampler-side drift correction for diffusion models under post-training quantization, modeling quantization error as an implicit stochastic perturbation to each denoising step and deriving a marginal-distribution-preserving drift adjustment.
  • The method estimates a timestep-wise variance statistic from a lightweight calibration pass, requiring as few as five paired full-precision/quantized runs.
  • It is plug-and-play with common samplers (Euler, flow-matching, DPM-Solver++) and PTQ methods (SVDQuant, MixDQ), incurring negligible overhead at inference.
  • Empirical results across six text-to-image models, three samplers, and two PTQ methods show FID improvements over quantized baselines in most settings, with up to 4.59 FID reduction on PixArt-Sigma (SVDQuant W3A4).
  • The approach preserves CLIP scores, indicating maintained image-language alignment while mitigating quantization-induced degradation.
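The calibration step described above can be sketched as follows. This is an illustrative interpretation, not the paper's code: it assumes the "timestep-wise variance statistic" is the variance of the difference between full-precision and quantized model outputs on identical inputs, pooled over runs and feature dimensions. The function and array names are hypothetical.

```python
import numpy as np

def calibrate_timestep_variance(fp_out, q_out):
    """Estimate a per-timestep variance statistic from paired runs.

    fp_out, q_out: arrays of shape (runs, timesteps, dim) holding the
    full-precision and quantized model outputs on identical inputs at
    each denoising step. The exact statistic is an assumption here.
    """
    err = q_out - fp_out              # quantization error at each step
    # Pool the error variance over runs and feature dimensions,
    # keeping one scalar per timestep.
    return err.var(axis=(0, 2))

# Synthetic stand-in for five paired calibration runs over 20 timesteps.
rng = np.random.default_rng(0)
fp = rng.standard_normal((5, 20, 64))
q = fp + 0.1 * rng.standard_normal((5, 20, 64))   # simulated quantization noise
sigma2 = calibrate_timestep_variance(fp, q)       # shape (20,)
```

With only five paired runs, pooling over the feature dimension is what makes the per-timestep estimate stable; here each timestep's statistic averages over 5 × 64 error samples.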

Abstract

Post-training quantization (PTQ) is a practical path to deploying large diffusion models, but quantization noise can accumulate over the denoising trajectory and degrade generation quality. We propose Q-Drift, a principled sampler-side correction that treats quantization error as an implicit stochastic perturbation on each denoising step and derives a marginal-distribution-preserving drift adjustment. Q-Drift estimates a timestep-wise variance statistic from a lightweight calibration pass, in practice requiring as few as five paired full-precision/quantized runs. The resulting sampler correction is plug-and-play with common samplers, diffusion models, and PTQ methods, and incurs negligible overhead at inference. Across six diverse text-to-image models (spanning DiT and U-Net architectures), three samplers (Euler, flow matching, DPM-Solver++), and two PTQ methods (SVDQuant, MixDQ), Q-Drift improves FID over the corresponding quantized baseline in most settings, with up to a 4.59 FID reduction on PixArt-Sigma (SVDQuant W3A4), while preserving CLIP scores.
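The "plug-and-play" claim can be made concrete with a minimal sketch of how such a correction slots into an Euler sampler. Everything below is hypothetical scaffolding: the additive form of the adjustment and the `drift_corr` signature are assumptions for illustration; the paper derives the actual marginal-distribution-preserving form.

```python
import numpy as np

def euler_step(x, t, dt, model, drift_corr=None):
    """One Euler update with an optional additive drift correction.

    `drift_corr(x, t)` stands in for whatever per-timestep adjustment a
    method like Q-Drift would supply on top of the (quantized) model's
    predicted velocity; the additive form is an assumption here.
    """
    v = model(x, t)
    if drift_corr is not None:
        v = v + drift_corr(x, t)
    return x + dt * v

# Toy velocity model: decay toward zero.
def toy_model(x, t):
    return -x

# Hypothetical calibrated per-timestep variances (made-up values).
sigma2 = {0.0: 0.01, 0.5: 0.02}

def drift_corr(x, t):
    # Illustrative adjustment scaled by the calibrated statistic; the
    # real correction's functional form comes from the paper's derivation.
    return -sigma2[t] * x

x0 = np.ones(4)
x_base = euler_step(x0, 0.0, 0.5, toy_model)             # uncorrected step
x_corr = euler_step(x0, 0.0, 0.5, toy_model, drift_corr) # corrected step
```

Because the correction is a pure function of the state, the timestep, and a precomputed table, the same wrapper applies unchanged to other step rules (flow matching, DPM-Solver++) and adds only a lookup and an add per step, consistent with the negligible-overhead claim.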