Stochastic-Dimension Frozen Sampled Neural Network for High-Dimensional Gross-Pitaevskii Equations on Unbounded Domains

arXiv cs.LG, April 13, 2026


Key Points

  • The paper introduces a stochastic-dimension frozen sampled neural network (SD-FSNN) aimed at solving high-dimensional Gross-Pitaevskii equations (GPEs) defined on unbounded spatial domains.
  • SD-FSNN is designed to be unbiased across dimensions and to keep computational cost independent of dimensionality, avoiding the exponential scaling typical of Hermite-basis discretizations.
  • By randomly sampling the hidden-layer weights and biases and keeping them fixed, the method avoids slow, iterative gradient-based training, improving both training time and accuracy.
  • A space-time separation approach is combined with adaptive ODE solvers to update evolution coefficients while maintaining temporal causality in the learned dynamics.
  • The network incorporates physics-informed components: a Gaussian-weighted ansatz enforcing the correct decay at infinity, a normalization projection layer for mass normalization, and an energy conservation constraint that limits long-time numerical dissipation. Together these yield strong comparative performance in accuracy and efficiency.
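To make the frozen-feature construction concrete, here is a minimal sketch of sampled Gaussian-weighted features with a Monte Carlo mass-normalization projection. The feature form (a Gaussian envelope times a tanh ridge function), the sampling distributions, and the importance-sampling quadrature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 10   # spatial dimension
M = 256  # number of sampled (frozen) features

# Hidden weights and biases are sampled once and never trained.
W = rng.normal(size=(M, d))
b = rng.uniform(-1.0, 1.0, size=M)

def features(x):
    """Gaussian-weighted random features, x: (N, d).
    The Gaussian envelope enforces exponential decay at infinity."""
    gauss = np.exp(-0.5 * np.sum(x**2, axis=1, keepdims=True))  # (N, 1)
    return gauss * np.tanh(x @ W.T + b)                          # (N, M)

def normalize(c, x_mc):
    """Scale coefficients so the Monte Carlo estimate of the mass
    ||u||_2^2 equals 1; x_mc are standard-normal quadrature points."""
    u = features(x_mc) @ c
    # importance weights 1/p(x) for x ~ N(0, I_d)
    w = (2 * np.pi) ** (d / 2) * np.exp(0.5 * np.sum(x_mc**2, axis=1))
    mass = np.mean(np.abs(u) ** 2 * w)
    return c / np.sqrt(mass)

c = rng.normal(size=M)
x_mc = rng.normal(size=(4096, d))
c = normalize(c, x_mc)
```

Because the features are frozen, only the coefficient vector `c` ever needs updating, which is what makes a linear-algebra or ODE-based fit possible instead of gradient descent.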

Abstract

In this paper, we propose a stochastic-dimension frozen sampled neural network (SD-FSNN) for solving a class of high-dimensional Gross-Pitaevskii equations (GPEs) on unbounded domains. SD-FSNN is unbiased across all dimensions, and its computational cost is independent of the dimension, avoiding the exponential growth in computational and memory costs associated with Hermite-basis discretizations. Additionally, we randomly sample the hidden weights and biases of the neural network, which significantly outperforms iterative, gradient-based optimization in terms of training time and accuracy. Furthermore, we employ a space-time separation strategy, using adaptive ordinary differential equation (ODE) solvers to update the evolution coefficients and incorporate temporal causality. To preserve the structure of the GPEs, we integrate a Gaussian-weighted ansatz into the neural network to enforce exponential decay at infinity, embed a normalization projection layer for mass normalization, and add an energy conservation constraint to mitigate long-time numerical dissipation. Comparative experiments with existing methods demonstrate the superior performance of SD-FSNN across a range of spatial dimensions and interaction parameters. Compared to existing random-feature methods, SD-FSNN reduces the complexity from linear in the dimension to dimension-independent. SD-FSNN also achieves better accuracy and faster training than general high-dimensional solvers, while remaining specialized to high-dimensional GPEs on unbounded domains.
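The space-time separation strategy can be illustrated with a small sketch: frozen features carry the spatial dependence, and an adaptive ODE solver evolves the coefficients. For simplicity the 1D heat equation u_t = u_xx stands in for the GPE right-hand side, and the Galerkin least-squares projection, finite-difference feature derivatives, and all parameter choices below are illustrative assumptions rather than the paper's method.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
M, N = 64, 200                       # features, collocation points
x = np.linspace(-8.0, 8.0, N)[:, None]
W = rng.normal(size=(M, 1))          # frozen hidden weights
b = rng.uniform(-1.0, 1.0, size=M)   # frozen hidden biases

def phi(x):
    # Gaussian-weighted random features enforcing decay at infinity
    return np.exp(-0.5 * x**2) * np.tanh(x @ W.T + b)

Phi = phi(x)                         # (N, M) frozen feature matrix
h = x[1, 0] - x[0, 0]
# second spatial derivative of each feature via central differences
Phi_xx = (np.roll(Phi, -1, axis=0) - 2 * Phi + np.roll(Phi, 1, axis=0)) / h**2
Phi_xx[0] = 0.0   # crude boundary handling; features are ~0 at |x| = 8
Phi_xx[-1] = 0.0

def rhs(t, c):
    # Galerkin least-squares projection of u_t = u_xx onto the frozen basis:
    # solve Phi @ cdot = Phi_xx @ c in the least-squares sense.
    cdot, *_ = np.linalg.lstsq(Phi, Phi_xx @ c, rcond=None)
    return cdot

# fit the initial condition u(x, 0) = exp(-x^2), then evolve adaptively
c0, *_ = np.linalg.lstsq(Phi, np.exp(-x[:, 0] ** 2), rcond=None)
sol = solve_ivp(rhs, (0.0, 0.1), c0, method="RK45", rtol=1e-6, atol=1e-8)
```

The key point is that time stepping touches only the coefficient vector, so adaptive error control (and, in the paper's setting, causality in time) comes for free from the ODE solver rather than from retraining the network.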