An Invariant Compiler for Neural ODEs in AI-Accelerated Scientific Simulation
arXiv cs.LG / 2026/3/26
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key points
- The paper argues that unconstrained neural ODEs can drift away from physically valid regions of state space by violating domain invariants (such as conservation laws), producing implausible long-horizon forecasts in scientific simulations.
- It reviews prior approaches that enforce invariance via soft penalties/regularization, noting these can improve accuracy but still lack guarantees that trajectories stay on the admissible manifold.
- The authors propose an "invariant compiler" that enforces invariants by construction: invariants are represented as first-class types, and a generic neural ODE specification is compiled into a structure-preserving architecture.
- The workflow is described as LLM-driven compilation that separates the invariants to be preserved from the learned dynamics, which then operate within the preserved scientific structure, yielding continuous-time trajectories that remain admissible up to numerical error.
- The work is positioned as a systematic design pattern for building invariant-respecting neural surrogates across multiple scientific domains.
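The paper describes the compiler only at a high level, so as a minimal sketch of the "invariant by construction" idea, consider a single linear invariant such as total mass. If the learned vector field is projected onto the null space of the invariant's gradient, the invariant is conserved exactly by the continuous-time dynamics, regardless of what the network learned. The function names and the toy dynamics below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_onto_invariant(raw_field, grad_invariant):
    """Remove the component of the learned field along the invariant's
    gradient, so d/dt invariant = grad @ field = 0 by construction."""
    g = grad_invariant
    return raw_field - g * (g @ raw_field) / (g @ g)

def raw_dynamics(x):
    # Stand-in for an unconstrained learned vector field (hypothetical).
    return np.tanh(x) - 0.1 * x

def constrained_dynamics(x):
    # Invariant: total mass sum(x); its gradient is the ones vector.
    return project_onto_invariant(raw_dynamics(x), np.ones_like(x))

# Forward-Euler rollout: total mass is preserved up to float round-off,
# whereas the raw dynamics would let it drift.
x = np.array([1.0, 2.0, 3.0])
mass0 = x.sum()
for _ in range(1000):
    x = x + 0.01 * constrained_dynamics(x)
drift = abs(x.sum() - mass0)
```

Nonlinear invariants work the same way, with `grad_invariant` evaluated at the current state; the paper's contribution is automating the derivation of such structure-preserving forms from typed invariant declarations rather than hand-coding each projection.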



