Coordinate Encoding on Linear Grids for Physics-Informed Neural Networks

arXiv cs.LG / 3/25/2026


Key Points

  • The paper addresses training difficulties in physics-informed neural networks (PINNs) for solving PDEs, attributing their slow convergence to spectral bias.
  • It proposes adding a coordinate-encoding layer that uses axis-independent linear grid cells to improve convergence by separating local domains.
  • The method interpolates encoded coordinates between grid points with natural cubic splines to ensure continuous derivatives required for PDE loss computations.
  • Numerical experiments reported in the study indicate improved training convergence speed and stable, efficient model performance compared with baseline approaches.
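
The key points above can be illustrated with a small sketch. The paper does not publish code, so the following is a hypothetical reading of the idea, assuming each axis gets its own 1-D grid of learnable feature vectors (here random placeholders) and that queries are encoded per axis with natural cubic splines before concatenation. The function name `encode_coordinates` and all shapes are illustrative assumptions, not the authors' API.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def encode_coordinates(x, grid, features):
    """Encode d-dimensional coordinates axis-by-axis (hypothetical sketch).

    x        : (n, d) query points
    grid     : (g,) 1-D grid nodes, assumed shared across axes for simplicity
    features : (d, g, f) feature values stored at the grid nodes of each axis
    Returns an (n, d*f) array of concatenated per-axis encodings.
    """
    n, d = x.shape
    encoded = []
    for axis in range(d):
        # Natural cubic spline: C^2-continuous, with zero second derivative
        # at the boundary nodes, so downstream PDE residuals stay smooth.
        spline = CubicSpline(grid, features[axis], bc_type="natural")
        encoded.append(spline(x[:, axis]))  # (n, f)
    return np.concatenate(encoded, axis=1)

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 8)          # 8 grid nodes per axis
features = rng.normal(size=(2, 8, 4))    # 2 axes, 4 features per node
x = rng.uniform(size=(16, 2))            # 16 query points in [0, 1]^2
z = encode_coordinates(x, grid, features)
print(z.shape)  # (16, 8)
```

Because each axis is handled independently, the cost grows linearly in the number of dimensions rather than exponentially, which is the plausible motivation for "axis-independent" grid cells.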

Abstract

In solving partial differential equations (PDEs), machine learning that exploits physical laws has received considerable attention owing to advantages such as mesh-free solutions, unsupervised training, and feasibility for high-dimensional problems. An effective approach is the physics-informed neural network (PINN), which builds on deep neural networks known for their excellent performance in various academic and industrial applications. However, PINNs suffer from slow training convergence caused by spectral bias. In this study, we propose a PINN-based method equipped with a coordinate-encoding layer on linear grid cells. The proposed method improves training convergence speed by separating local domains using grid cells, and it reduces the overall computational cost by using axis-independent linear grid cells. It also achieves efficient and stable model training by interpolating the encoded coordinates between grid points with natural cubic splines, which guarantees the continuous derivatives of the model required by the PDE loss functions. The results of numerical experiments demonstrate the effective performance and fast training convergence of the proposed method.
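
The continuity requirement in the abstract can be checked concretely. A PDE residual loss typically involves first and second derivatives of the network output with respect to the coordinates, so any encoding must be at least C^2. The snippet below, a minimal illustration using SciPy's `CubicSpline` (not the authors' implementation), verifies numerically that a natural cubic spline has continuous first and second derivatives across an interior knot, and that its second derivative vanishes at the boundary nodes.

```python
import numpy as np
from scipy.interpolate import CubicSpline

grid = np.linspace(0.0, 1.0, 6)          # 6 grid nodes on [0, 1]
rng = np.random.default_rng(1)
vals = rng.normal(size=6)                # placeholder node values

spline = CubicSpline(grid, vals, bc_type="natural")

# Jumps in the first and second derivatives across an interior knot:
# both should be tiny, since a cubic spline is C^2-continuous.
knot, eps = grid[3], 1e-6
d1_jump = abs(spline(knot + eps, 1) - spline(knot - eps, 1))
d2_jump = abs(spline(knot + eps, 2) - spline(knot - eps, 2))
print(d1_jump, d2_jump)

# Natural boundary condition: zero second derivative at both ends.
print(float(spline(grid[0], 2)), float(spline(grid[-1], 2)))
```

Had the encoding used piecewise-linear interpolation instead, the first derivative would jump at every knot and the second derivative would vanish inside each cell, making second-order PDE losses ill-defined; this is a plausible reason the paper chooses cubic splines.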