Deep Hilbert--Galerkin Methods for Infinite-Dimensional PDEs and Optimal Control

arXiv cs.LG / 3/23/2026


Key Points

  • They introduce Hilbert-Galerkin Neural Operators (HGNOs) to parameterize solutions of fully nonlinear second-order PDEs on separable Hilbert spaces, enabling approximation of complex terms like Hessians and unbounded operators.
  • They establish universal approximation theorems for functions on Hilbert spaces (and their Fréchet derivatives up to second order) under novel, non-sequential, non-metrizable topologies, providing a theoretical foundation for HGNOs.
  • They propose Deep Hilbert-Galerkin and Hilbert Actor-Critic training frameworks that minimize the L^2_mu(H) residual of the PDE across the entire Hilbert space, rather than finite-dimensional projections.
  • They demonstrate the approach on Kolmogorov and HJB PDEs related to optimal control of deterministic and stochastic heat and Burgers' equations, illustrating potential for infinite-dimensional control problems and related SPDEs.
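To make the residual-minimization idea in the last two bullets concrete, here is a minimal toy sketch (our illustration, not the authors' code): on a d-dimensional Galerkin truncation of H = L^2(0,1), we minimize the Monte Carlo approximation of the L^2_mu(H) residual of a PDE over a parametric ansatz. The model equation, coefficients, and quadratic ansatz below are all hypothetical choices made so that the least-squares solution is known in closed form.

```python
import numpy as np

# Hypothetical model problem: a linear resolvent-type equation on H,
#     rho * v(u) + <Lambda u, grad v(u)> = ||u||^2,
# with Lambda = diag(lambda_k), lambda_k = (pi k)^2 (Dirichlet Laplacian
# eigenvalues). Its exact solution is v(u) = sum_k u_k^2 / (rho + 2 lambda_k).

rng = np.random.default_rng(0)
d, n, rho = 8, 4000, 1.0
lam = (np.pi * np.arange(1, d + 1)) ** 2           # eigenvalues lambda_k

# Sample u ~ mu: a Gaussian with covariance diag(1/lambda_k), mimicking a
# trace-class reference measure on the (truncated) Hilbert space.
u = rng.standard_normal((n, d)) / np.sqrt(lam)

# Linear-in-theta ansatz v_theta(u) = sum_k theta_k * u_k^2. The PDE residual
#     rho * v_theta(u) + <Lambda u, grad v_theta(u)> - ||u||^2
# is then linear in theta, so minimizing the empirical L^2_mu residual
# reduces to an ordinary least-squares problem.
A = (rho + 2.0 * lam) * u**2                       # residual design matrix
b = (u**2).sum(axis=1)                             # right-hand side ||u||^2
theta, *_ = np.linalg.lstsq(A, b, rcond=None)

# Recovered coefficients should match the exact 1 / (rho + 2 lambda_k).
print(np.max(np.abs(theta - 1.0 / (rho + 2.0 * lam))))
```

In the paper's setting the linear ansatz is replaced by an HGNO, the residual is no longer linear in the parameters, and the loss is minimized by stochastic gradient descent rather than a direct least-squares solve; the sketch only shows the shape of the objective.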

Abstract

We develop deep learning-based approximation methods for fully nonlinear second-order PDEs on separable Hilbert spaces, such as HJB equations for infinite-dimensional control, by parameterizing solutions via Hilbert--Galerkin Neural Operators (HGNOs). We prove the first Universal Approximation Theorems (UATs) which are sufficiently powerful to address these problems, based on novel topologies for Hessian terms and corresponding novel continuity assumptions on the fully nonlinear operator. These topologies are non-sequential and non-metrizable, making the problem delicate. In particular, we prove UATs for functions on Hilbert spaces, together with their Fréchet derivatives up to second order, and for unbounded operators applied to the first derivative, ensuring that HGNOs are able to approximate all the PDE terms. For control problems, we further prove UATs for optimal feedback controls in terms of our approximating value function HGNO. We develop numerical training methods, which we call Deep Hilbert--Galerkin and Hilbert Actor-Critic (reinforcement learning) Methods, for these problems by minimizing the L^2_mu(H)-norm of the residual of the PDE on the whole Hilbert space, not just a projection of the PDE onto finite dimensions. This is the first paper to propose such an approach. The models considered arise in many applied sciences, such as functional differential equations in physics and Kolmogorov and HJB PDEs related to controlled PDEs, SPDEs, path-dependent systems, partially observed stochastic systems, and mean-field SDEs. We numerically solve examples of Kolmogorov and HJB PDEs related to the optimal control of deterministic and stochastic heat and Burgers' equations, demonstrating the promise of our deep learning-based approach.
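As an illustration of how a feedback control is read off an approximate value function (standard HJB reasoning; the operators and cost below are generic placeholders, not notation from the paper), consider an HJB equation on a Hilbert space H with control-affine dynamics and running cost quadratic in the control:

```latex
\partial_t v + \langle Au + Ba,\, Dv \rangle
  + \tfrac{1}{2}\operatorname{Tr}\!\left(Q\, D^2 v\right)
  + \ell(u) + \tfrac{1}{2}\langle Ra, a\rangle \;\to\; \inf_a,
\qquad
a^*(t,u) = -R^{-1} B^* \, Dv(t,u).
```

Minimizing the Hamiltonian pointwise in the control a yields the feedback map a* in terms of the first Fréchet derivative Dv, which is why the UATs for derivatives of HGNO approximations, and the actor network approximating a*, matter for the control applications described above.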