An Invariant Compiler for Neural ODEs in AI-Accelerated Scientific Simulation

arXiv cs.LG / 2026/3/26


Key Points

  • The paper argues that unconstrained neural ODEs can drift off physically valid regions by violating domain invariants (such as conservation laws), leading to implausible long-horizon forecasts in scientific simulations.
  • It reviews prior approaches that enforce invariance via soft penalties/regularization, noting these can improve accuracy but still lack guarantees that trajectories stay on the admissible manifold.
  • The authors propose the “invariant compiler,” which enforces invariants by construction: it represents them as first-class types and compiles a generic neural ODE specification into a structure-preserving architecture.
  • The workflow is described as LLM-driven compilation that separates the invariants that must be preserved from the dynamics learned within that preserved scientific structure, yielding trajectories that remain admissible in continuous time (up to numerical integration error in practice).
  • The work is positioned as a systematic design pattern for building invariant-respecting neural surrogates across multiple scientific domains.

Abstract

Neural ODEs are increasingly used as continuous-time models for scientific and sensor data, but unconstrained neural ODEs can drift and violate domain invariants (e.g., conservation laws), yielding physically implausible solutions. In turn, this can compound error in long-horizon prediction and surrogate simulation. Existing solutions typically aim to enforce invariance by soft penalties or other forms of regularization, which can reduce overall error but do not guarantee that trajectories will not leave the constraint manifold. We introduce the invariant compiler, a framework that enforces invariants by construction: it treats invariants as first-class types and uses an LLM-driven compilation workflow to translate a generic neural ODE specification into a structure-preserving architecture whose trajectories remain on the admissible manifold in continuous time (and up to numerical integration error in practice). This compiler view cleanly separates what must be preserved (scientific structure) from what is learned from data (dynamics within that structure). It provides a systematic design pattern for invariant-respecting neural surrogates across scientific domains.
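To make the "by construction" idea concrete, here is a minimal sketch (not the paper's implementation; the invariant `g`, the stand-in vector field, and all function names are illustrative assumptions): given an invariant g(x), an unconstrained vector field can be made structure-preserving by projecting it onto the tangent space of the level set {x : g(x) = g(x0)}, so that d/dt g(x(t)) = ⟨∇g, dx/dt⟩ = 0 along every trajectory.

```python
import numpy as np

# Hypothetical example invariant: total "mass" sum(x) must be conserved.
def g(x):
    return np.sum(x)

def grad_g(x):
    return np.ones_like(x)

def f_unconstrained(x):
    # Stand-in for a learned neural ODE vector field; a real model
    # would be a trained network. May drift off the level set of g.
    return np.tanh(x) - 0.1 * x**2

def f_projected(x):
    # Remove the component of f along grad g, so the flow stays
    # tangent to the constraint manifold: <grad g, dx/dt> = 0.
    v = f_unconstrained(x)
    n = grad_g(x)
    return v - (np.dot(n, v) / np.dot(n, n)) * n

def rk4_step(f, x, h):
    # Classical fourth-order Runge-Kutta step.
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, h=0.01, steps=500):
    x = x0.copy()
    for _ in range(steps):
        x = rk4_step(f, x, h)
    return x

x0 = np.array([1.0, -0.5, 0.25])
x_T = integrate(f_projected, x0)
drift = abs(g(x_T) - g(x0))  # invariant drift; near zero up to float rounding
```

A soft penalty on g during training would only make such drift small on average; the projection makes it vanish identically in continuous time, which is the separation the compiler view formalizes: g is fixed structure, f_unconstrained is the learned part.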