Scaling Laws and Pathologies of Single-Layer PINNs: Network Width and PDE Nonlinearity

arXiv cs.LG / 3/16/2026

Key Points

  • The paper establishes empirical scaling laws for Single-Layer PINNs on canonical nonlinear PDEs and identifies two optimization pathologies: a baseline pathology, in which increasing network width fails to reduce the solution error, and a compounding pathology, in which PDE nonlinearity makes that failure worse (a minimal width-sweep sketch follows this list).
  • It shows that a simple separable power law cannot describe the scaling: the error depends on width and nonlinearity through a more complex, non-separable relationship, consistent with spectral bias against the high-frequency solution components that intensify with nonlinearity (a fitting sketch appears after the Abstract).
  • The authors argue that optimization, not approximation capacity, is the primary bottleneck in scaling PINNs and propose a methodology to empirically measure these complex scaling effects.
  • The results have implications for designing and training PINNs on nonlinear PDEs, suggesting that gains are more likely to come from better optimization strategies than from wider networks.
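
The paper does not ship code, but the baseline pathology is straightforward to probe: train a one-hidden-layer PINN at several widths on a fixed nonlinear problem and check whether the error actually drops. The sketch below is a minimal, hypothetical setup in PyTorch using a manufactured nonlinear ODE, u'' + λu³ = f with exact solution u*(x) = sin(πx), as a stand-in for the paper's canonical PDEs; the function name, the test problem, and all hyperparameters are our own assumptions, not the authors' protocol.

```python
import math
import torch

torch.manual_seed(0)

def train_single_layer_pinn(width, lam, steps=3000, n_points=128):
    """Fit u'' + lam*u**3 = f on [0, 1] with u(0) = u(1) = 0, where f is
    manufactured so the exact solution is u*(x) = sin(pi*x)."""
    x = torch.linspace(0.0, 1.0, n_points).reshape(-1, 1)
    f = -math.pi**2 * torch.sin(math.pi * x) + lam * torch.sin(math.pi * x) ** 3
    x.requires_grad_(True)

    model = torch.nn.Sequential(          # single hidden layer of `width` tanh units
        torch.nn.Linear(1, width),
        torch.nn.Tanh(),
        torch.nn.Linear(width, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(steps):
        opt.zero_grad()
        u = model(x)
        du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
        d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
        residual = d2u + lam * u**3 - f           # PDE residual at collocation points
        bc = model(torch.tensor([[0.0], [1.0]]))  # boundary-condition penalty
        loss = (residual**2).mean() + (bc**2).mean()
        loss.backward()
        opt.step()

    with torch.no_grad():                         # L2 error against the exact solution
        x_test = torch.linspace(0.0, 1.0, 512).reshape(-1, 1)
        err = model(x_test) - torch.sin(math.pi * x_test)
        return err.pow(2).mean().sqrt().item()

# Width sweep at fixed nonlinearity: under the baseline pathology, this error
# stalls instead of decreasing at the rate approximation theory would allow.
for width in (16, 64, 256, 1024):
    print(width, train_single_layer_pinn(width, lam=1.0))
```

Repeating the sweep across several values of lam produces the two-dimensional error grid that a separability analysis, such as the one sketched after the Abstract, would operate on.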

Abstract

We establish empirical scaling laws for Single-Layer Physics-Informed Neural Networks on canonical nonlinear PDEs. We identify a dual optimization failure: (i) a baseline pathology, where the solution error fails to decrease with network width, even at fixed nonlinearity, falling short of theoretical approximation bounds, and (ii) a compounding pathology, where this failure is exacerbated by nonlinearity. We provide quantitative evidence that a simple separable power law is insufficient, and that the scaling behavior is governed by a more complex, non-separable relationship. This failure is consistent with the concept of spectral bias, where networks struggle to learn the high-frequency solution components that intensify with nonlinearity. We show that optimization, not approximation capacity, is the primary bottleneck, and propose a methodology to empirically measure these complex scaling effects.
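
The abstract's central quantitative claim, that the error does not factor as a separable power law E = A·w^(−α)·λ^β, can be checked with ordinary least squares in log-log space: fit the separable form, fit an extended form with a (log w)(log λ) interaction, and compare residuals. The sketch below is illustrative rather than the authors' methodology; it runs on synthetic data with an interaction deliberately baked in, and a real analysis would substitute the measured error grid.

```python
import numpy as np

# Synthetic stand-in for a measured error grid at widths w[i] and nonlinearity
# strengths lam[j] (e.g. from sweeps like the sketch above). The interaction
# term baked into logE mimics a non-separable width/nonlinearity relationship.
w = np.array([16.0, 64.0, 256.0, 1024.0])
lam = np.array([0.5, 1.0, 2.0, 4.0])
lw, ll = np.meshgrid(np.log(w), np.log(lam), indexing="ij")
rng = np.random.default_rng(0)
logE = 1.0 - 0.5 * lw + 0.3 * ll + 0.1 * lw * ll + 0.05 * rng.standard_normal(lw.shape)
y = logE.ravel()

# Separable power law E = A * w**(-alpha) * lam**beta, i.e.
#   log E = log A - alpha*log w + beta*log lam
X_sep = np.column_stack([np.ones(y.size), lw.ravel(), ll.ravel()])
# Non-separable alternative: same terms plus a (log w)*(log lam) interaction.
X_non = np.column_stack([X_sep, (lw * ll).ravel()])

for name, X in [("separable", X_sep), ("non-separable", X_non)]:
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ coef) ** 2).sum())
    print(f"{name:14s} coefficients={np.round(coef, 3)} residual SS={rss:.4f}")
```

A separable fit forced onto non-separable data leaves a large residual, while the interaction model recovers it; on real measurements, a significantly nonzero interaction coefficient is the signature of the non-separable scaling the paper reports.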