Regularity of Solutions to Beckmann's Parametric Optimal Transport

arXiv stat.ML / 3/23/2026


Key Points

  • The paper develops a regularity theory for Beckmann's problem in optimal transport using an unconstrained Lagrangian formulation and variational first-order optimality conditions.
  • It shows that the Lagrange multiplier enforcing the divergence constraint satisfies a Poisson equation and that the transport flux is the gradient of a potential (see the derivation sketch after this list).
  • Using Schauder elliptic regularity, it derives exact Hölder regularity for the potential, flux, and generated flow under Hölder-continuous source and target densities on a bounded, regular domain.
  • For parameter-dependent targets (as in conditional generative learning), it provides sufficient conditions for separate and joint Hölder continuity of the resulting vector field in both parameter and data dimensions.
  • The work notes that such vector fields can be approximated by deep ReQU neural networks in Hölder norm, and that the approach generalizes to other probability paths such as Fisher-Rao gradient flows.
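
To make the second and third points concrete, here is a minimal sketch of the derivation in standard notation; the symbols ρ₀, ρ₁, v, and φ are our own choices, not necessarily the paper's, and the boundary treatment is simplified to a no-flux condition.

```latex
% Beckmann's problem with a squared-flux cost (notation ours): minimize over
% vector fields v on a bounded, regular domain \Omega
\min_{v}\ \tfrac{1}{2}\int_\Omega |v|^2 \,dx
\quad \text{s.t.} \quad \operatorname{div} v = \rho_1 - \rho_0 \ \text{in } \Omega .

% Unconstrained Lagrangian with multiplier (potential) \varphi:
\mathcal{L}(v,\varphi) = \tfrac{1}{2}\int_\Omega |v|^2 \,dx
  + \int_\Omega \varphi \bigl(\operatorname{div} v - (\rho_1 - \rho_0)\bigr) \,dx .

% Stationarity in v (integrate by parts; the boundary term vanishes under
% no-flux conditions):  v = \nabla \varphi .
% Stationarity in \varphi recovers the constraint, hence a Poisson equation:
\Delta \varphi = \rho_1 - \rho_0 , \qquad v = \nabla \varphi .

% Schauder step: \rho_0, \rho_1 \in C^{0,\alpha} yields \varphi \in C^{2,\alpha},
% hence v = \nabla \varphi \in C^{1,\alpha}.
```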

Abstract

Beckmann's problem in optimal transport minimizes the total squared flux in a continuous transport problem from a source to a target distribution. This article develops a regularity theory for solutions to Beckmann's problem via an unconstrained Lagrangian formulation and the variational first-order optimality conditions. It turns out that the Lagrange multiplier enforcing Beckmann's divergence constraint satisfies a Poisson equation, and the flux vector field is obtained as the gradient of this potential. Using Schauder estimates from elliptic regularity theory, the exact Hölder regularity of the potential, the flux, and the generated flow is derived from the Hölder regularity of the source and target densities on a bounded, regular domain. If the target distribution depends on parameters, as in conditional ("promptable") generative learning, we provide sufficient conditions for separate and joint Hölder continuity of the resulting vector field in the parameter and data variables. Following a recent result by Belomestny et al., such vector fields can then be approximated by deep ReQU neural networks in the C^{k,α}-Hölder norm. We also show that this approach generalizes to other probability paths, such as Fisher-Rao gradient flows.
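
As a rough numerical illustration of this structure (ours, not the paper's method), the sketch below solves the Poisson equation for the multiplier on a periodic grid via FFTs and reads off the flux as the potential's gradient. The function names, the Gaussian densities, and the parameter theta steering the target are all hypothetical, and periodicity merely stands in for the paper's bounded, regular domain.

```python
# Minimal numerical sketch (ours, not the paper's method): solve the Poisson
# equation Δφ = ρ1 − ρ0 for the Lagrange multiplier on a periodic grid via
# FFTs, then recover the Beckmann flux as v = ∇φ. Periodicity is a convenience
# standing in for the paper's bounded, regular domain; all names are hypothetical.
import numpy as np

n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

def gaussian(cx, cy, s):
    """Normalized Gaussian bump; equal masses make ρ1 − ρ0 mean-zero (solvability)."""
    g = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2.0 * s**2))
    return g / g.sum()

theta = (0.7, 0.7)                         # "prompt" parameter steering the target
rho0 = gaussian(0.3, 0.3, 0.05)            # source density
rho1 = gaussian(theta[0], theta[1], 0.05)  # parameter-dependent target density
f = rho1 - rho0                            # right-hand side of the Poisson equation

k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)
KX, KY = np.meshgrid(k, k, indexing="ij")
lap = -(KX**2 + KY**2)                     # Fourier symbol of the Laplacian
lap[0, 0] = 1.0                            # dodge division by zero; zero mode fixed below

phi_hat = np.fft.fft2(f) / lap
phi_hat[0, 0] = 0.0                        # pin the additive constant of φ

vx = np.real(np.fft.ifft2(1j * KX * phi_hat))  # flux component v_x = ∂φ/∂x
vy = np.real(np.fft.ifft2(1j * KY * phi_hat))  # flux component v_y = ∂φ/∂y

# Sanity check: div v should reproduce ρ1 − ρ0 up to spectral accuracy.
div_v = np.real(np.fft.ifft2(1j * KX * np.fft.fft2(vx) + 1j * KY * np.fft.fft2(vy)))
print("max |div v - (rho1 - rho0)|:", np.abs(div_v - f).max())
```

Re-solving for different values of theta traces out a parameter-dependent vector field of the kind whose separate and joint Hölder continuity the paper studies.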