AI Navigate

Variance-Aware Adaptive Weighting for Diffusion Model Training

arXiv cs.LG / 3/12/2026


Key Points

  • The paper identifies training imbalance across log-SNR noise levels in diffusion models due to loss variance, which can hamper optimization and stability.
  • It proposes a variance-aware adaptive weighting strategy that dynamically adjusts training weights based on observed variance across noise levels to balance optimization.
  • Experiments on CIFAR-10 and CIFAR-100 show improved generative performance (lower FID) and reduced seed-by-seed variance compared to standard training.
  • Additional analyses, such as loss-log-SNR visualizations and variance heatmaps, suggest the approach stabilizes training dynamics, demonstrating the value of variance-aware training for diffusion models.

Abstract

Diffusion models have recently achieved remarkable success in generative modeling, yet their training dynamics across different noise levels remain highly imbalanced, which can lead to inefficient optimization and unstable learning behavior. In this work, we investigate this imbalance from the perspective of loss variance across log-SNR levels and propose a variance-aware adaptive weighting strategy to address it. The proposed approach dynamically adjusts training weights based on the observed variance distribution, encouraging a more balanced optimization process across noise levels. Extensive experiments on CIFAR-10 and CIFAR-100 demonstrate that the proposed method consistently improves generative performance over standard training schemes, achieving lower Fréchet Inception Distance (FID) while also reducing performance variance across random seeds. Additional analysis, including loss-log-SNR visualization, variance heatmaps, and ablation studies, further reveals that the adaptive weighting effectively stabilizes training dynamics. These results highlight the potential of variance-aware training strategies for improving diffusion model optimization.
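To make the idea concrete, here is a minimal sketch of what variance-aware adaptive weighting could look like in practice: track an exponential moving average of the per-bin loss variance over log-SNR bins, then weight each sample inversely to its bin's variance. All names (`VarianceAwareWeighter`), bin ranges, and the inverse-variance formula are illustrative assumptions for this sketch, not the paper's exact method.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# bin training losses by log-SNR, keep EMA estimates of per-bin
# loss mean/variance, and derive inverse-variance sample weights.
import numpy as np

class VarianceAwareWeighter:
    def __init__(self, n_bins=10, snr_range=(-5.0, 5.0), eps=1e-8, momentum=0.9):
        self.edges = np.linspace(*snr_range, n_bins + 1)  # log-SNR bin edges
        self.mean = np.zeros(n_bins)   # EMA of per-bin loss mean
        self.var = np.ones(n_bins)     # EMA of per-bin loss variance
        self.momentum = momentum
        self.eps = eps

    def _bin(self, log_snr):
        # Map log-SNR values to bin indices, clipping out-of-range values.
        return np.clip(np.digitize(log_snr, self.edges) - 1, 0, len(self.mean) - 1)

    def update(self, log_snr, losses):
        """EMA-update per-bin loss statistics from one batch."""
        bins = self._bin(log_snr)
        for b in np.unique(bins):
            batch = losses[bins == b]
            self.mean[b] = self.momentum * self.mean[b] + (1 - self.momentum) * batch.mean()
            self.var[b] = self.momentum * self.var[b] + (1 - self.momentum) * batch.var()

    def weights(self, log_snr):
        """Inverse-variance weights, normalized to unit mean over the batch."""
        w = 1.0 / (self.var[self._bin(log_snr)] + self.eps)
        return w / w.mean()

# Toy usage: higher loss noise at positive log-SNR gets down-weighted.
rng = np.random.default_rng(0)
weighter = VarianceAwareWeighter()
log_snr = rng.uniform(-5, 5, size=256)
losses = rng.normal(1.0, 0.1 + 0.5 * (log_snr > 0), size=256) ** 2
weighter.update(log_snr, losses)
w = weighter.weights(log_snr)
```

In a training loop, such weights would multiply the per-sample diffusion loss before averaging; the unit-mean normalization keeps the overall loss scale comparable to standard (unweighted) training.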