Epistemic Generative Adversarial Networks

arXiv cs.LG / 3/20/2026

Key Points

  • The paper generalizes the GAN loss using Dempster-Shafer theory for both generator and discriminator, aiming to improve training dynamics and output quality.
  • It adds a generator-side architectural enhancement that predicts a mass function per image pixel, enabling explicit uncertainty quantification in outputs.
  • By leveraging this uncertainty, the method achieves greater generation diversity and more representative samples compared to standard GANs.
  • Experimental results demonstrate improved variability and provide a principled probabilistic framework for modeling and interpreting uncertainty in generative processes.
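The per-pixel mass function mentioned in the second key point can be illustrated with a small sketch. Note the parameterization below is an assumption for illustration: a three-way softmax head assigning mass to the focal sets {real}, {fake}, and the ignorance set Θ is one standard way to produce a valid Dempster-Shafer mass function, but it is not necessarily the paper's exact design.

```python
import numpy as np

def pixel_mass_function(logits):
    """Map per-pixel logits of shape (H, W, 3) to a Dempster-Shafer
    mass function over {real}, {fake}, and the ignorance set Theta.

    A softmax guarantees each pixel's masses are non-negative and
    sum to 1, as a mass function requires. A high mass on Theta
    flags pixels the generator is uncertain about.
    """
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    m = e / e.sum(axis=-1, keepdims=True)
    return {"real": m[..., 0], "fake": m[..., 1], "theta": m[..., 2]}
```

With zero logits every pixel receives the vacuous mass (1/3, 1/3, 1/3), i.e. maximal ignorance; as training sharpens the logits, mass shifts from Θ onto the singletons.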

Abstract

Generative models, particularly Generative Adversarial Networks (GANs), often suffer from a lack of output diversity, frequently generating similar samples rather than a wide range of variations. This paper introduces a novel generalization of the GAN loss function based on the Dempster-Shafer theory of evidence, applied to both the generator and the discriminator. Additionally, we propose an architectural enhancement to the generator that enables it to predict a mass function for each image pixel. This modification allows the model to quantify uncertainty in its outputs and to leverage that uncertainty to produce more diverse and representative generations. Experimental evidence shows that our approach not only improves generation variability but also provides a principled framework for modeling and interpreting uncertainty in generative processes.
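To connect such mass functions to an adversarial loss, the masses must be reduced to a scalar probability. The sketch below uses the pignistic transform, which splits the ignorance mass evenly over the singletons, followed by an ordinary binary cross-entropy. This is a hedged illustration of how a Dempster-Shafer quantity could enter a GAN loss, not the paper's actual loss generalization.

```python
import numpy as np

def pignistic_probability(m_real, m_fake, m_theta):
    # Pignistic transform: ignorance mass on Theta = {real, fake}
    # is shared equally between the two singleton hypotheses.
    return m_real + 0.5 * m_theta

def evidential_bce(m_real, m_fake, m_theta, is_real):
    # Binary cross-entropy on the pignistic probability.
    # eps keeps log() finite at the boundaries.
    eps = 1e-12
    p = np.clip(pignistic_probability(m_real, m_fake, m_theta), eps, 1 - eps)
    return -(is_real * np.log(p) + (1 - is_real) * np.log(1 - p))
```

For a real sample, a confident correct mass assignment (e.g. m = (0.98, 0.01, 0.01)) yields a near-zero loss, while a vacuous assignment (m(Θ) = 1) yields the chance-level loss log 2, so unresolved uncertainty is penalized but never as harshly as a confident mistake.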