Bias mitigation in graph diffusion models

arXiv cs.CV / 4/3/2026


Key Points

  • The paper argues that common graph diffusion models suffer from bias caused by a mismatch between the forward diffusion perturbation distribution and the standard Gaussian start used in reverse sampling.
  • It further attributes degraded generation quality to the interaction of this reverse-starting bias with diffusion models’ inherent exposure bias.
  • To fix the reverse-starting bias, the authors design a Langevin sampling algorithm that sets a new reverse starting point aligned with the forward maximum perturbation distribution.
  • To mitigate exposure bias, the paper introduces a score-correction method based on a newly defined score difference.
  • The proposed approach requires no neural network architecture changes and is reported to achieve state-of-the-art results across multiple models, datasets, and tasks, with code released on GitHub.
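The reverse-starting-point idea above can be illustrated with plain Langevin dynamics in a toy 1-D setting: start samples at a standard Gaussian, then drift them toward a target distribution via its score. This is a hedged sketch only; `langevin_align`, the step size, and the Gaussian target are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def langevin_align(score_fn, x_init, n_steps=200, step_size=1e-2, seed=None):
    """Run unadjusted Langevin dynamics to move samples x_init toward the
    distribution whose score (gradient of log-density) is score_fn.
    Hypothetical sketch; the paper's algorithm may differ in detail."""
    rng = np.random.default_rng(seed)
    x = np.array(x_init, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Langevin update: score drift plus scaled Gaussian noise
        x = x + step_size * score_fn(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Toy target N(mu, sigma^2); its score is -(x - mu) / sigma^2.
mu, sigma = 3.0, 0.5
score = lambda x: -(x - mu) / sigma**2

# Start from a standard Gaussian (the conventional reverse start) and
# realign toward the target before any reverse sampling would begin.
x0 = np.random.default_rng(0).standard_normal(10_000)
x = langevin_align(score, x0, n_steps=200, step_size=1e-2, seed=1)
```

In the paper's setting the target would be the forward diffusion's maximum perturbation distribution rather than this toy Gaussian, so the aligned samples become the new reverse-starting point without any change to the network.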

Abstract

Most existing graph diffusion models have significant bias problems. We observe that the forward diffusion's maximum perturbation distribution in most models deviates from the standard Gaussian distribution, while reverse sampling consistently starts from a standard Gaussian distribution, which produces a reverse-starting bias. Together with the inherent exposure bias of diffusion models, this leads to degraded generation quality. This paper proposes a comprehensive approach to mitigate both biases. To mitigate reverse-starting bias, we employ a newly designed Langevin sampling algorithm to align with the forward maximum perturbation distribution, establishing a new reverse-starting point. To address the exposure bias, we introduce a score correction mechanism based on a newly defined score difference. Our approach, which requires no network modifications, is validated across multiple models, datasets, and tasks, achieving state-of-the-art results. Code is available at https://github.com/kunzhan/spp