A Theoretical Framework for Statistical Evaluability of Generative Models

arXiv cs.LG / 4/8/2026


Key Points

  • The paper proposes a theoretical framework for understanding when and how generative models can be statistically evaluated using finite held-out i.i.d. test samples from the ground-truth distribution.
  • It shows that integral probability metrics (IPMs) can be estimated from finite samples with bounded additive/multiplicative approximation error, and with arbitrary precision when the test class has finite fat-shattering dimension.
  • The work argues that Rényi divergences and KL divergence are not reliably evaluable from finite samples because their estimates can be dominated by rare events.
  • It also examines perplexity as a potential evaluation metric, outlining both its usefulness and limitations for generative-model assessment.
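
The IPM estimation claim in the points above can be illustrated with a minimal sketch. The plug-in estimator takes the supremum, over the test class, of the gap between sample means. Here we use a hypothetical test class of step functions `f_t(x) = 1[x <= t]` (our illustrative choice, not the paper's), under which the IPM reduces to the Kolmogorov sup-CDF distance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounded test class: step functions f_t(x) = 1[x <= t] on a
# grid of thresholds. The resulting IPM is the Kolmogorov (sup-CDF) distance.
thresholds = np.linspace(-3, 3, 61)

def empirical_ipm(xs, ys, thresholds):
    """Plug-in IPM estimate: sup over the test class of the gap between
    the sample means of f under the two distributions."""
    diffs = [abs(np.mean(xs <= t) - np.mean(ys <= t)) for t in thresholds]
    return max(diffs)

# Finite samples from a ground truth P = N(0, 1) and a model Q = N(0.5, 1).
xs = rng.normal(0.0, 1.0, 5000)
ys = rng.normal(0.5, 1.0, 5000)

est = empirical_ipm(xs, ys, thresholds)
```

Because each test function is bounded in [0, 1], each sample mean concentrates around its expectation, so the plug-in estimate lands near the population IPM (about 0.20 for these two Gaussians), consistent with the finite-sample evaluability result.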

Abstract

Statistical evaluation aims to estimate the generalization performance of a model using held-out i.i.d. test data sampled from the ground-truth distribution. In supervised learning settings such as classification, performance metrics such as error rate are well-defined, and test error reliably approximates population error given sufficiently large datasets. In contrast, evaluation is more challenging for generative models due to their open-ended nature: it is unclear which metrics are appropriate and whether such metrics can be reliably evaluated from finite samples. In this work, we introduce a theoretical framework for evaluating generative models and establish evaluability results for commonly used metrics. We study two categories of metrics: test-based metrics, including integral probability metrics (IPMs), and Rényi divergences. We show that IPMs with respect to any bounded test class can be evaluated from finite samples up to multiplicative and additive approximation errors. Moreover, when the test class has finite fat-shattering dimension, IPMs can be evaluated with arbitrary precision. In contrast, Rényi and KL divergences are not evaluable from finite samples, as their values can be critically determined by rare events. We also analyze the potential and limitations of perplexity as an evaluation method.
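
The rare-event obstruction for KL divergence can be made concrete with a toy construction (ours, not the paper's): two models that agree everywhere except on a symbol carrying vanishing mass under the ground truth. Their true KL divergences from the ground truth differ maximally, yet a finite test sample almost surely never contains the rare symbol, so no estimator based on that sample can separate them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth P puts tiny mass eps on a rare third symbol.
eps = 1e-8
p  = np.array([0.5 - eps / 2, 0.5 - eps / 2, eps])  # ground truth P
q1 = np.array([0.5 - eps / 2, 0.5 - eps / 2, eps])  # model matching P exactly
q2 = np.array([0.5, 0.5, 0.0])                      # model missing the rare symbol

def kl(p, q):
    """KL(P || Q) over the support of P; infinite if Q misses that support."""
    mask = p > 0
    with np.errstate(divide="ignore"):
        return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# True values: KL(P || Q1) = 0, while KL(P || Q2) = +inf. But a finite test
# sample from P almost surely never contains the rare symbol, so the two
# models are indistinguishable from the data.
sample = rng.choice(3, size=100_000, p=p)
rare_seen = int(np.sum(sample == 2))
```

The population divergences are 0 and +∞, yet the expected number of rare-symbol occurrences in 100,000 draws is about 10⁻³. This is the sense in which KL and Rényi values are "critically determined by rare events" that finite samples cannot witness.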