Generalization Bounds for Physics-Informed Neural Networks for the Incompressible Navier-Stokes Equations

arXiv cs.LG / 3/25/2026


Key Points

  • The paper derives the first rigorous upper bounds on the generalization error for unsupervised physics-informed neural networks (PINNs) approximating solutions to the incompressible Navier–Stokes equations with depth-2 neural networks.
  • The analysis proceeds by bounding the Rademacher complexity of the PINN risk, which characterizes the generalization gap in terms of the fluid's kinematic viscosity and the loss regularization parameters rather than the explicit network width.
  • The resulting sample complexity bounds are dimension-independent, which is a strong theoretical advantage for high-dimensional fluid dynamics problems.
  • The authors argue that the bounds motivate novel activation functions for fluid-dynamics PINN solvers, and they provide empirical validation on the Taylor–Green vortex benchmark.
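To make the second bullet concrete, here is a minimal sketch of the quantity being bounded: the empirical Rademacher complexity E_σ[sup_f (1/n) Σᵢ σᵢ f(xᵢ)] of a class of depth-2 tanh networks with ℓ1-bounded outer weights. The sup over the (infinite) class is approximated by a finite, sign-symmetric set of randomly drawn networks; all widths, bounds, and sample sizes below are illustrative choices, not values from the paper.

```python
import math
import random

random.seed(0)

def depth2_net(params, x):
    """Evaluate f(x) = sum_k a_k * tanh(w_k . x + b_k) for a depth-2 tanh net."""
    return sum(a * math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
               for a, w, b in params)

def random_net(width, dim, a_bound):
    """Draw a random net whose outer weights have l1 norm exactly a_bound."""
    a_raw = [random.uniform(-1, 1) for _ in range(width)]
    scale = a_bound / sum(abs(v) for v in a_raw)
    return [(a * scale,
             [random.uniform(-1, 1) for _ in range(dim)],
             random.uniform(-1, 1))
            for a in a_raw]

def empirical_rademacher(sample, nets, n_sigma=200):
    """Monte Carlo estimate of E_sigma sup_f (1/n) sum_i sigma_i f(x_i),
    with the sup taken over a finite, sign-symmetric set of nets."""
    n = len(sample)
    # Cache each net's outputs on the sample; include -f for every f so the
    # estimate is guaranteed nonnegative.
    outputs = []
    for p in nets:
        vals = [depth2_net(p, x) for x in sample]
        outputs.append(vals)
        outputs.append([-v for v in vals])
    total = 0.0
    for _ in range(n_sigma):
        sigma = [random.choice((-1, 1)) for _ in range(n)]
        total += max(sum(s * v for s, v in zip(sigma, vals)) / n
                     for vals in outputs)
    return total / n_sigma

# Illustrative sizes (not from the paper).
dim, width, a_bound = 3, 16, 1.0
sample = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(64)]
nets = [random_net(width, dim, a_bound) for _ in range(50)]
est = empirical_rademacher(sample, nets)
print(f"empirical Rademacher estimate: {est:.4f}")
```

Since |tanh| ≤ 1 and the outer weights satisfy ‖a‖₁ ≤ `a_bound`, every network in the class is bounded by `a_bound`, which caps the estimate independently of the input dimension `dim`; that is the flavor of the dimension-independence claimed in the third bullet.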

Abstract

This work establishes rigorous, first-of-its-kind upper bounds on the generalization error for the method of approximating solutions to the (d+1)-dimensional incompressible Navier-Stokes equations by depth-2 neural networks trained via the unsupervised Physics-Informed Neural Network (PINN) framework. This is achieved by bounding the Rademacher complexity of the PINN risk. For appropriately weight-bounded network classes, our derived generalization bounds do not explicitly depend on the network width, and our framework characterizes the generalization gap in terms of the fluid's kinematic viscosity and loss regularization parameters. In particular, the resulting sample complexity bounds are dimension-independent. Our generalization bounds suggest using novel activation functions for solving fluid dynamics. We provide empirical validation of the suggested activation functions and the corresponding bounds on a PINN setup solving the Taylor-Green vortex benchmark.
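The Taylor–Green vortex mentioned in the abstract is a standard PINN benchmark precisely because it is an exact solution of the incompressible Navier–Stokes equations, so the PDE residual that a PINN minimizes is identically zero on it. The sketch below, assuming the 2-D unit-density form of the vortex and an arbitrary viscosity ν = 0.1 (not a value from the paper), evaluates the x-momentum residual with central finite differences to confirm it vanishes up to discretization error.

```python
import math

NU = 0.1  # kinematic viscosity (assumed value for this sketch)

def tg(x, y, t, nu=NU):
    """2-D Taylor-Green vortex: an exact Navier-Stokes solution (unit density)."""
    f = math.exp(-2.0 * nu * t)
    u = -math.cos(x) * math.sin(y) * f
    v = math.sin(x) * math.cos(y) * f
    p = -0.25 * (math.cos(2 * x) + math.cos(2 * y)) * f * f
    return u, v, p

def momentum_residual_x(x, y, t, h=1e-4, nu=NU):
    """PINN-style residual of the x-momentum equation,
    u_t + u u_x + v u_y + p_x - nu (u_xx + u_yy),
    evaluated with central finite differences."""
    u, v, _ = tg(x, y, t)
    u_t = (tg(x, y, t + h)[0] - tg(x, y, t - h)[0]) / (2 * h)
    u_x = (tg(x + h, y, t)[0] - tg(x - h, y, t)[0]) / (2 * h)
    u_y = (tg(x, y + h, t)[0] - tg(x, y - h, t)[0]) / (2 * h)
    p_x = (tg(x + h, y, t)[2] - tg(x - h, y, t)[2]) / (2 * h)
    u_xx = (tg(x + h, y, t)[0] - 2 * u + tg(x - h, y, t)[0]) / h**2
    u_yy = (tg(x, y + h, t)[0] - 2 * u + tg(x, y - h, t)[0]) / h**2
    return u_t + u * u_x + v * u_y + p_x - nu * (u_xx + u_yy)

# Check the residual on a small grid of collocation-style points.
res = max(abs(momentum_residual_x(0.3 + 0.5 * i, 0.7 + 0.4 * j, 0.2))
          for i in range(4) for j in range(4))
print(f"max |x-momentum residual| on sample points: {res:.2e}")
```

A trained PINN replaces `tg` with a neural network and drives this same residual (plus incompressibility and boundary terms) toward zero at sampled collocation points; the paper's bounds control how well a small empirical residual generalizes beyond those samples.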