A Theory-guided Weighted $L^2$ Loss for solving the BGK model via Physics-informed neural networks

arXiv cs.LG / 4/8/2026


Key Points

  • The paper argues that standard Physics-Informed Neural Network (PINN) training with an unweighted $L^2$ loss is inadequate for the BGK (Bhatnagar-Gross-Krook) model because it may not yield accurate macroscopic moments.
  • It proposes a theory-guided, velocity-weighted $L^2$ loss that increases the penalty for errors in high-velocity regions to better align the learned solution with physical behavior.
  • The authors derive a stability estimate and show that minimizing the proposed weighted loss guarantees convergence of the approximate solution.
  • Numerical experiments indicate that the weighted-loss PINN approach improves accuracy and robustness over multiple benchmarks compared with the standard loss formulation.
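To make the core idea concrete, here is a minimal NumPy sketch of the contrast between a standard $L^2$ loss and a velocity-weighted one. The specific weight $w(v) = (1 + |v|^2)^s$ and the exponent `s` are illustrative assumptions, not the paper's exact formulation; the point is only that the weight amplifies residual errors at large $|v|$, which a plain mean-squared loss treats the same as errors near $v = 0$.

```python
import numpy as np

def standard_l2_loss(residual):
    # unweighted L^2 loss: all velocity regions penalized equally
    return np.mean(residual**2)

def weighted_l2_loss(residual, v, s=2.0):
    # velocity-weighted L^2 loss with an illustrative polynomial weight
    # w(v) = (1 + |v|^2)^s, which up-weights errors at high |v|
    w = (1.0 + v**2) ** s
    return np.mean(w * residual**2)

# Toy comparison: two residual profiles of identical shape and magnitude,
# one concentrated near v = 0, the other near v = 8.
v = np.linspace(-10.0, 10.0, 201)
res_low = np.exp(-v**2)          # error localized at low velocity
res_high = np.exp(-(v - 8.0)**2) # error localized at high velocity

# The standard loss barely distinguishes the two profiles,
# while the weighted loss penalizes the high-velocity error far more.
print(standard_l2_loss(res_low), standard_l2_loss(res_high))
print(weighted_l2_loss(res_low, v), weighted_l2_loss(res_high, v))
```

Because the BGK distribution function decays rapidly in velocity, pointwise errors at high $|v|$ can be small in the unweighted norm yet still corrupt the macroscopic moments (which integrate the distribution against powers of $v$); the weighting compensates for this.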

Abstract

While Physics-Informed Neural Networks offer a promising framework for solving partial differential equations, the standard $L^2$ loss formulation is fundamentally insufficient when applied to the Bhatnagar-Gross-Krook (BGK) model. Specifically, simply minimizing the standard loss does not guarantee accurate predictions of the macroscopic moments, causing the approximate solutions to fail to capture the true physical solution. To overcome this limitation, we introduce a velocity-weighted $L^2$ loss function designed to effectively penalize errors in the high-velocity regions. By establishing a stability estimate for the proposed approach, we show that minimizing the weighted loss guarantees convergence of the approximate solution. In addition, numerical experiments demonstrate that employing this weighted PINN loss leads to superior accuracy and robustness across various benchmarks compared to the standard approach.