Linear-Nonlinear Fusion Neural Operator for Partial Differential Equations

arXiv cs.LG / March 26, 2026


Key Points

  • The paper proposes a new neural-operator architecture, the Linear-Nonlinear Fusion Neural Operator (LNF-NO), to learn direct mappings from PDE parameters to solution spaces for faster inference than traditional numerical solvers.
  • LNF-NO improves learning efficiency by explicitly decoupling linear and nonlinear effects, combining them via multiplicative fusion to create a lightweight and more interpretable representation.
  • The method supports multiple functional inputs and can operate on both regular grids and irregular geometries, widening applicability to real-world PDE problems.
  • Experiments across several PDE operator-learning benchmarks (including nonlinear Poisson-Boltzmann and multi-physics coupled systems) show LNF-NO typically trains faster than DeepONet and FNO while matching or exceeding accuracy.
  • In a 3D Poisson-Boltzmann benchmark, LNF-NO reports the best accuracy among compared models and about 2.7× faster training than a 3D FNO baseline.
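The multiplicative-fusion idea described above can be illustrated with a minimal NumPy sketch. This is not the paper's exact architecture (layer sizes, the form of the linear operator, and the MLP are all illustrative assumptions): a learned linear operator acts on the sampled input function, a small nonlinear network acts on the same samples, and the two branches are fused by an elementwise product.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    # Small two-layer perceptron with tanh activation (nonlinear branch).
    return np.tanh(x @ W1 + b1) @ W2 + b2

# Illustrative setup: the input function u is sampled at n grid points.
n, hidden = 64, 32
u = rng.standard_normal(n)

# Linear branch: a learned dense linear operator acting on the samples
# (stands in for whatever linear component the paper uses).
W_lin = rng.standard_normal((n, n)) / np.sqrt(n)
linear_part = W_lin @ u                      # shape (n,)

# Nonlinear branch: a small MLP over the same samples.
W1 = rng.standard_normal((n, hidden)) / np.sqrt(n)
b1 = np.zeros(hidden)
W2 = rng.standard_normal((hidden, n)) / np.sqrt(hidden)
b2 = np.zeros(n)
nonlinear_part = mlp(u, W1, b1, W2, b2)      # shape (n,)

# Multiplicative fusion: elementwise product of the two branches
# yields the fused operator output.
fused = linear_part * nonlinear_part
print(fused.shape)
```

In training, both branches would be optimized jointly; the product structure lets the linear branch carry the dominant (e.g., elliptic) part of the operator while the nonlinear branch modulates it, which is the decoupling the paper credits for its efficiency gains.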

Abstract

Neural operator learning directly constructs the mapping from the equation parameter space to the solution space, enabling efficient direct inference in practical applications without repeatedly solving partial differential equations (PDEs), an advantage that is difficult to achieve with traditional numerical methods. In this work, we find that explicitly decoupling linear and nonlinear effects within such operator mappings leads to markedly improved learning efficiency. This yields a novel network structure, the Linear-Nonlinear Fusion Neural Operator (LNF-NO), which models operator mappings via the multiplicative fusion of a linear component and a nonlinear component, achieving a lightweight and interpretable representation. This linear-nonlinear decoupling enables efficient capture of complex solution features at the operator level while maintaining stability and generality. LNF-NO naturally supports multiple functional inputs and is applicable to both regular grids and irregular geometries. Across a diverse suite of PDE operator-learning benchmarks, including nonlinear Poisson-Boltzmann equations and multi-physics coupled systems, LNF-NO is typically substantially faster to train than Deep Operator Networks (DeepONet) and Fourier Neural Operators (FNO), while achieving comparable or better accuracy in most cases. On the tested 3D Poisson-Boltzmann case, LNF-NO attains the best accuracy among the compared models and trains approximately 2.7× faster than a 3D FNO baseline.