On Divergence Measures for Training GFlowNets

arXiv cs.AI / 4/13/2026


Key Points

  • The paper studies how different divergence measures (Rényi-$\alpha$, Tsallis-$\alpha$, and forward/reverse KL) should be used to train Generative Flow Networks (GFlowNets) so that the learned forward/backward policies satisfy flow-matching conditions.
  • It argues that naively applying standard KL minimization can yield biased and high-variance gradient estimators, motivating more statistically efficient divergence-specific objectives.
  • The authors design variance-reduced, statistically efficient stochastic gradient estimators for these divergences, using control variates derived from REINFORCE leave-one-out and score-matching techniques.
  • Experiments show that minimizing the proposed divergences produces provably correct training and often achieves significantly faster convergence than earlier GFlowNet optimization methods.
  • Overall, the work aligns GFlowNet training more closely with generalized variational inference by reframing training through a divergence-minimization viewpoint.
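For discrete distributions, the Rényi-$\alpha$ and Tsallis-$\alpha$ divergences named above have simple closed forms, and both recover the forward KL divergence as $\alpha \to 1$. The following minimal sketch (not from the paper; the function names and toy distributions are illustrative) computes them directly:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    # D_alpha(p || q) = 1/(alpha - 1) * log sum_x p(x)^alpha * q(x)^(1 - alpha)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def tsallis_divergence(p, q, alpha):
    # D^T_alpha(p || q) = 1/(alpha - 1) * (sum_x p(x)^alpha * q(x)^(1 - alpha) - 1)
    return (np.sum(p**alpha * q**(1.0 - alpha)) - 1.0) / (alpha - 1.0)

def kl_divergence(p, q):
    # Forward KL: sum_x p(x) * log(p(x) / q(x))
    return np.sum(p * np.log(p / q))

# Two toy distributions over three outcomes.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# As alpha -> 1, both alpha-divergences approach KL(p || q).
for alpha in (0.5, 0.9, 0.999):
    print(alpha, renyi_divergence(p, q, alpha), tsallis_divergence(p, q, alpha))
print(kl_divergence(p, q))
```

Varying $\alpha$ trades off mass-covering versus mode-seeking behavior of the fitted distribution, which is one reason the paper considers this family rather than KL alone.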

Abstract

Generative Flow Networks (GFlowNets) are amortized inference models designed to sample from unnormalized distributions over compositional objects, with applications in generative modeling for tasks in fields such as causal discovery, NLP, and drug discovery. Traditionally, the training procedure for GFlowNets seeks to minimize the expected log-squared difference between a proposal (forward policy) and a target (backward policy) distribution, which enforces certain flow-matching conditions. While this training procedure is closely related to variational inference (VI), directly attempting standard Kullback-Leibler (KL) divergence minimization can lead to provably biased and potentially high-variance estimators. Therefore, we first review four divergence measures, namely, the Rényi-$\alpha$, Tsallis-$\alpha$, reverse KL, and forward KL divergences, and design statistically efficient estimators for their stochastic gradients in the context of training GFlowNets. Then, we verify that properly minimizing these divergences yields a provably correct and empirically effective training scheme, often leading to significantly faster convergence than previously proposed optimization methods. To achieve this, we design control variates based on the REINFORCE leave-one-out and score-matching estimators to reduce the variance of the learning objectives' gradients. Our work contributes by narrowing the gap between GFlowNet training and generalized variational approximations, paving the way for algorithmic ideas informed by the divergence-minimization viewpoint.
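The REINFORCE leave-one-out control variate mentioned in the abstract can be illustrated on a problem far simpler than GFlowNet training. In this hypothetical sketch (my own toy setup, not the paper's estimator), we take the score-function gradient of $\mathbb{E}_{x \sim \mathcal{N}(\theta, 1)}[x^2]$ and center each sample's reward by the mean of the other samples' rewards, which keeps the estimator unbiased while lowering its variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_grads(theta, n_samples, leave_one_out):
    # Toy objective: gradient of E_{x ~ N(theta, 1)}[f(x)] with f(x) = x**2.
    # REINFORCE (score-function) estimator: f(x) * d/dtheta log p(x; theta),
    # where the score of a unit-variance Gaussian is (x - theta).
    x = rng.normal(theta, 1.0, size=n_samples)
    f = x**2
    score = x - theta
    if leave_one_out:
        # Leave-one-out baseline: each sample's reward is centered by the
        # mean reward of the OTHER samples, so the baseline is independent
        # of that sample and the estimator stays unbiased.
        baseline = (f.sum() - f) / (n_samples - 1)
        return (f - baseline) * score
    return f * score

# Compare per-sample gradient estimates with and without the control variate.
# The true gradient of E[x^2] at theta = 2 is 2 * theta = 4.
theta = 2.0
plain = np.concatenate([reinforce_grads(theta, 16, False) for _ in range(500)])
loo = np.concatenate([reinforce_grads(theta, 16, True) for _ in range(500)])
print("plain:", plain.mean(), plain.var())
print("loo:  ", loo.mean(), loo.var())
```

Both estimators average to roughly 4, but the leave-one-out version has markedly smaller variance; the paper applies the same idea (alongside a score-matching control variate) to the gradients of its divergence objectives.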