AI Navigate

Inverse Neural Operator for ODE Parameter Optimization

arXiv cs.LG · March 13, 2026

📰 News · Models & Research

Key Points

  • The paper introduces Inverse Neural Operator (INO), a two-stage framework to recover hidden ODE parameters from sparse observations.
  • Stage 1 uses a Conditional Fourier Neural Operator with cross-attention to reconstruct full ODE trajectories from sparse inputs, employing spectral regularization to suppress high-frequency artifacts.
  • Stage 2 uses an Amortized Drifting Model that learns a kernel-weighted velocity field in parameter space to transport random parameter initializations toward the ground truth without backpropagating through the surrogate, avoiding Jacobian instabilities in stiff regimes.
  • Experiments on a real-world stiff atmospheric chemistry benchmark (POLLU, 25 parameters) and a synthetic Gene Regulatory Network (GRN, 40 parameters) show INO outperforms gradient-based and amortized baselines in parameter recovery accuracy.
  • Inference time is 0.23s, representing a 487x speedup over iterative gradient descent.
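The spectral regularization mentioned in Stage 1 can be illustrated with a small sketch: penalize the energy of high-frequency Fourier modes in a reconstructed trajectory so the surrogate prefers smooth outputs. The cutoff, penalty form, and NumPy setting below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def spectral_penalty(traj, cutoff=8):
    """Penalize energy in Fourier modes above `cutoff` along the time axis.

    traj: array of shape (T,) or (T, d) -- a reconstructed trajectory.
    Returns a scalar penalty (mean squared magnitude of the high modes).
    """
    coeffs = np.fft.rfft(traj, axis=0)   # one-sided spectrum over time
    high = coeffs[cutoff:]               # modes above the cutoff
    return float(np.mean(np.abs(high) ** 2))

# A smooth trajectory incurs a small penalty; injected high-frequency
# noise (the "artifacts" the paper suppresses) raises it sharply.
t = np.linspace(0.0, 1.0, 128)
smooth = np.sin(2 * np.pi * t)
noisy = smooth + 0.1 * np.sin(2 * np.pi * 40.0 * t)
print(spectral_penalty(smooth), spectral_penalty(noisy))
```

In training, a term like this would be added to the reconstruction loss with a small weight, trading a little fidelity in sharp transients for stability in stiff regimes.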

Abstract

We propose the Inverse Neural Operator (INO), a two-stage framework for recovering hidden ODE parameters from sparse, partial observations. In Stage 1, a Conditional Fourier Neural Operator (C-FNO) with cross-attention learns a differentiable surrogate that reconstructs full ODE trajectories from arbitrary sparse inputs, suppressing high-frequency artifacts via spectral regularization. In Stage 2, an Amortized Drifting Model (ADM) learns a kernel-weighted velocity field in parameter space, transporting random parameter initializations toward the ground truth without backpropagating through the surrogate, avoiding the Jacobian instabilities that afflict gradient-based inversion in stiff regimes. Experiments on a real-world stiff atmospheric chemistry benchmark (POLLU, 25 parameters) and a synthetic Gene Regulatory Network (GRN, 40 parameters) show that INO outperforms gradient-based and amortized baselines in parameter recovery accuracy while requiring only 0.23s inference time, a 487x speedup over iterative gradient descent.
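The core idea of Stage 2, transporting random parameter initializations along a learned velocity field instead of backpropagating through the surrogate, can be sketched with a toy kernel-weighted drift. Here the "anchors", RBF bandwidth, and Euler step are hypothetical stand-ins for whatever the ADM actually learns; this only illustrates the transport mechanism, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])  # hypothetical ground-truth parameters

def velocity(theta, anchors, bandwidth=1.0):
    """Kernel-weighted velocity: RBF-weighted average of drifts toward anchors."""
    diffs = anchors - theta                                   # (n, d) drift toward each anchor
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * bandwidth ** 2))
    w /= w.sum()                                              # normalized kernel weights
    return w @ diffs                                          # weighted mean drift

# Stand-in "training" anchors clustered near the true parameters.
anchors = target + 0.05 * rng.standard_normal((16, 3))

theta = rng.standard_normal(3)        # random initialization in parameter space
for _ in range(200):                  # Euler integration of the drift field
    theta += 0.1 * velocity(theta, anchors)

print(np.linalg.norm(theta - target))  # small: theta was transported to the truth
```

No Jacobian of the forward surrogate appears anywhere in the update, which is exactly the property that sidesteps the gradient instabilities of stiff systems; the update only evaluates the (learned) velocity field.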