Generative models for decision-making under distributional shift

arXiv cs.LG / 4/7/2026


Key Points

  • The tutorial argues that distributional shift in real deployments, where the deployment distribution may be shifted, context-dependent, partially observed, or stress-induced, is better addressed by constructing decision-relevant distributions than by relying only on a nominal estimate from historical data.
  • It presents flow- and score-based generative models as mathematical tools for representing and transforming distributions via transport maps, velocity/score fields, guided stochastic dynamics, and pushforward/continuity formulations.
  • The framework connects generative modeling to operations research concepts using Fokker–Planck equations, Wasserstein geometry, and optimization in probability space, enabling robust scenario construction.
  • It shows how these generative models can learn nominal uncertainty, derive stressed or least-favorable distributions for robustness, and generate conditional/posterior distributions under side information and partial observation.
  • The article highlights theoretical results and guarantees (e.g., convergence for iterative flow models, minimax analysis in transport-map space, and error-transfer bounds for posterior sampling with generative priors).

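The pushforward and continuity formulations mentioned above can be written compactly. The following is my own notational sketch of the standard objects (transport map $T$, velocity field $v_t$, drift $b_t$), not the paper's exact statement:

```latex
p_1 = T_{\#}\, p_0
  \qquad \text{(pushforward of the nominal } p_0 \text{ by a transport map } T\text{)}
\\[4pt]
\partial_t p_t + \nabla \!\cdot\! \left( p_t\, v_t \right) = 0
  \qquad \text{(continuity equation for a flow with velocity field } v_t\text{)}
\\[4pt]
\partial_t p_t = -\nabla \!\cdot\! \left( b_t\, p_t \right) + \tfrac{\sigma^2}{2}\,\Delta p_t
  \qquad \text{(Fokker--Planck equation for } dX_t = b_t(X_t)\,dt + \sigma\, dW_t\text{)}
```

The first line is the static (map-based) view of transforming a distribution; the second and third are the dynamic views for deterministic flows and stochastic diffusions, respectively.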
Abstract

Many data-driven decision problems are formulated using a nominal distribution estimated from historical data, while performance is ultimately determined by a deployment distribution that may be shifted, context-dependent, partially observed, or stress-induced. This tutorial presents modern generative models, particularly flow- and score-based methods, as mathematical tools for constructing decision-relevant distributions. From an operations research perspective, their primary value lies not in unconstrained sample synthesis but in representing and transforming distributions through transport maps, velocity fields, score fields, and guided stochastic dynamics. We present a unified framework based on pushforward maps, continuity, Fokker-Planck equations, Wasserstein geometry, and optimization in probability space. Within this framework, generative models can be used to learn nominal uncertainty, construct stressed or least-favorable distributions for robustness, and produce conditional or posterior distributions under side information and partial observation. We also highlight representative theoretical guarantees, including forward-reverse convergence for iterative flow models, first-order minimax analysis in transport-map space, and error-transfer bounds for posterior sampling with generative priors. The tutorial provides a principled introduction to using generative models for scenario generation, robust decision-making, uncertainty quantification, and related problems under distributional shift.
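To make the "score fields and guided stochastic dynamics" idea concrete, here is a minimal self-contained sketch of unadjusted Langevin dynamics driving samples from a nominal distribution toward a target whose score is known in closed form. This is a generic illustration of score-driven transport, not code from the tutorial: the target $N(\mu, \sigma^2)$ and all parameter values are my own choices, and in a real application the score would be learned by a score-based generative model rather than written analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target ("stressed") distribution: N(mu, sigma^2). Its score,
# grad_x log p(x) = (mu - x) / sigma^2, is available in closed form here;
# a score-based generative model would estimate this field from data.
mu, sigma = 3.0, 0.5

def score(x):
    return (mu - x) / sigma**2

def langevin_sample(n_samples=20000, n_steps=500, eps=1e-2):
    """Guided stochastic dynamics: each Euler step drifts along the score
    and adds Gaussian noise, so the empirical distribution is transported
    from the nominal N(0, 1) toward the target."""
    x = rng.standard_normal(n_samples)  # start from the nominal distribution
    for _ in range(n_steps):
        x = x + eps * score(x) + np.sqrt(2 * eps) * rng.standard_normal(n_samples)
    return x

samples = langevin_sample()
print(samples.mean(), samples.std())  # close to mu and sigma
```

The same loop structure carries over when the analytic score is replaced by a trained score network, which is what connects this construction to the flow- and score-based models discussed in the tutorial.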