Graph-Informed Adversarial Modeling: Infimal Subadditivity of Interpolative Divergences

arXiv stat.ML / 3/23/2026


Key Points

  • The paper studies adversarial learning when the target distribution factorizes according to a known Bayesian network, enabling graph-informed modeling.
  • It proves an infimal subadditivity principle for interpolative divergences (including (f,Γ)-divergences), showing that, under suitable conditions, a global variational discrepancy is bounded by an average of graph-aligned, family-level discrepancies.
  • In the additive regime, the surrogate is exact, providing a theoretical justification for using localized family-level discriminators in a graph-informed GAN rather than a single graph-agnostic discriminator.
  • The authors extend the results to integral probability metrics and proximal optimal transport, identify natural discriminator classes, and report experiments showing improved stability and structural recovery versus graph-agnostic baselines.

Abstract

We study adversarial learning when the target distribution factorizes according to a known Bayesian network. For interpolative divergences, including (f,\Gamma)-divergences, we prove a new infimal subadditivity principle showing that, under suitable conditions, a global variational discrepancy is controlled by an average of family-level discrepancies aligned with the graph. In an additive regime, this surrogate is exact. This provides a variational justification for replacing a graph-agnostic GAN, which uses a monolithic discriminator, with a graph-informed GAN that uses localized family-level discriminators. The result does not require the optimizer itself to factorize according to the graph. We also obtain parallel results for integral probability metrics and proximal optimal transport divergences, identify natural discriminator classes for which the theory applies, and present experiments showing improved stability and structural recovery relative to graph-agnostic baselines.
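The additive regime described above has a familiar analogue outside the paper's (f,Γ) machinery: for KL divergence, the chain rule makes a global discrepancy decompose exactly into a sum of family-level (node-given-parents) terms whenever both distributions factorize over the same graph. The sketch below illustrates this exactness numerically on a two-node network X1 → X2; the distributions `p1, p2, q1, q2` are arbitrary made-up examples, not taken from the paper.

```python
import math

def kl(p, q):
    """Discrete KL divergence D(p || q) over matching supports."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Bayesian network X1 -> X2, binary variables.
# Joint factorizes as joint(x1, x2) = p1[x1] * p2[x1][x2].
p1 = [0.3, 0.7]
p2 = [[0.9, 0.1], [0.2, 0.8]]   # p2[x1] = P(X2 | X1 = x1)
q1 = [0.5, 0.5]
q2 = [[0.6, 0.4], [0.4, 0.6]]

# Global discrepancy: KL between the full joints.
P = [p1[a] * p2[a][b] for a in range(2) for b in range(2)]
Q = [q1[a] * q2[a][b] for a in range(2) for b in range(2)]
global_kl = kl(P, Q)

# Family-level surrogate: one localized term per node,
# conditioned on (and averaged over) its parents.
family_sum = kl(p1, q1) + sum(p1[a] * kl(p2[a], q2[a]) for a in range(2))

# In this additive regime the surrogate is exact.
print(abs(global_kl - family_sum) < 1e-12)
```

This exact decomposition is what licenses scoring each family with its own localized discriminator; the paper's contribution is establishing the subadditive (and, in the additive regime, exact) analogue for interpolative divergences, where such a chain rule is not automatic.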