Weak-PDE-Net: Discovering Open-Form PDEs via Differentiable Symbolic Networks and Weak Formulation

arXiv cs.LG / 3/25/2026


Key Points

  • Weak-PDE-Net is presented as an end-to-end differentiable framework to discover governing Partial Differential Equations (PDEs) from sparse and noisy observations, addressing instability from numerical differentiation and limited flexibility in candidate libraries.
  • The method combines a forward response learner (using learnable Gaussian kernels with a lightweight MLP) with a weak-form PDE generator that uses symbolic networks plus an integral module to avoid explicit numerical differentiation.
  • To broaden beyond a fixed library of candidate terms, the approach applies Differentiable Neural Architecture Search during training to explore the functional space for open-form PDE identification.
  • For improved physical consistency in multivariable systems, it incorporates Galilean invariance constraints and symmetry equivariance assumptions into the learning process.
  • The authors report that experiments on multiple PDE benchmarks show accurate recovery of governing equations even under highly sparse and noisy data conditions.

Abstract

Discovering governing Partial Differential Equations (PDEs) from sparse and noisy data is a challenging issue in data-driven scientific computing. Conventional sparse regression methods often suffer from two major limitations: (i) the instability of numerical differentiation under sparse and noisy data, and (ii) the restricted flexibility of a pre-defined candidate library. We propose Weak-PDE-Net, an end-to-end differentiable framework that can robustly identify open-form PDEs. Weak-PDE-Net consists of two interconnected modules: a forward response learner and a weak-form PDE generator. The learner embeds learnable Gaussian kernels within a lightweight MLP, serving as a surrogate model that adaptively captures system dynamics from sparse observations. Meanwhile, the generator integrates a symbolic network with an integral module to construct weak-form PDEs, avoiding explicit numerical differentiation and improving robustness to noise. To relax the constraints of the pre-defined library, we leverage a Differentiable Neural Architecture Search strategy during training to explore the functional space, which enables the efficient discovery of open-form PDEs. The capability of Weak-PDE-Net in multivariable systems discovery is further enhanced by incorporating Galilean invariance constraints and symmetry equivariance hypotheses to ensure physical consistency. Experiments on several challenging PDE benchmarks demonstrate that Weak-PDE-Net accurately recovers governing equations, even under highly sparse and noisy observations.
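
The "Differentiable Neural Architecture Search" step can be pictured as a DARTS-style relaxation: instead of hard-selecting one candidate term per slot of the symbolic network, each slot outputs a softmax-weighted mixture of the candidates, so the selection itself receives gradients during training and is discretized afterwards. The tiny operator library and names below are illustrative assumptions, not the paper's actual search space.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))
    return e / e.sum()

# Illustrative candidate library for one slot of the symbolic network;
# each entry maps (u, u_x, u_xx) samples to a candidate term.
library = {
    "u":      lambda u, ux, uxx: u,
    "u_x":    lambda u, ux, uxx: ux,
    "u_xx":   lambda u, ux, uxx: uxx,
    "u*u_x":  lambda u, ux, uxx: u * ux,
}

def mixed_term(alpha, u, ux, uxx):
    """Softmax-weighted sum over the library (continuous relaxation);
    gradients w.r.t. the architecture parameters alpha flow through
    the weights during training."""
    w = softmax(alpha)
    terms = np.stack([f(u, ux, uxx) for f in library.values()])
    return w @ terms  # (4,) weights against (4, N) candidate values

def discretize(alpha):
    """After training, keep only the highest-weight operator."""
    return list(library)[int(np.argmax(alpha))]

# Toy check: alpha strongly favouring u_xx makes the relaxed mixture
# collapse (approximately) onto that single candidate.
u, ux, uxx = np.ones(5), 2 * np.ones(5), 3 * np.ones(5)
alpha = np.array([0.0, 0.0, 8.0, 0.0])
print(discretize(alpha))  # "u_xx"
```

In a full pipeline the mixture would feed the weak-form residual loss, so the same gradient step updates both the architecture weights and the coefficients of the surviving terms.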
