AI Navigate

Towards Infinitely Long Neural Simulations: Self-Refining Neural Surrogate Models for Dynamical Systems

arXiv cs.LG / 3/19/2026


Key Points

  • The authors formalize a unifying mathematical framework that makes explicit the trade-off between short-time fidelity and long-time consistency in autoregressive neural surrogates for dynamical-system simulation.
  • Within this framework, they propose a robust, hyperparameter-free Self-refining Neural Surrogate (SNS), implemented as a conditional diffusion model, that balances short-time fidelity with long-time consistency by construction.
  • SNS can be deployed either as a standalone model that refines its own autoregressive outputs or as a complementary module that enforces long-time consistency for existing surrogates (see the sketch after this list); its numerical feasibility is demonstrated on complex dynamical systems over arbitrarily long time horizons.
  • The approach preserves the orders-of-magnitude speedups of neural surrogates while mitigating distribution drift, enabling robust long-horizon simulations.
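To make the standalone deployment mode concrete, here is a minimal toy sketch, not the paper's implementation: `true_step`, `surrogate_step`, and `refine` are hypothetical stand-ins, and the learned conditional diffusion refiner is replaced by a hand-written projection onto the data manifold.

```python
import numpy as np

# Toy illustration of a self-refined autoregressive rollout.
# Hypothetical stand-ins only: these functions are NOT the paper's API;
# the learned diffusion refiner is replaced here by an exact projection
# onto the data manifold (the unit circle).

def true_step(x, dt=0.1):
    """Ground-truth dynamics: uniform rotation on the unit circle."""
    theta = np.arctan2(x[1], x[0]) + dt
    return np.array([np.cos(theta), np.sin(theta)])

def surrogate_step(x, dt=0.1, err=0.02):
    """One-step surrogate: accurate short-time, but each call injects a
    small radial error, so errors compound over long rollouts."""
    return true_step(x, dt) * (1.0 + err)

def refine(x):
    """Stand-in for the refinement step (a conditional diffusion model
    in SNS): project the state back onto the unit circle."""
    return x / np.linalg.norm(x)

x_plain = np.array([1.0, 0.0])
x_refined = x_plain.copy()
for _ in range(1000):
    x_plain = surrogate_step(x_plain)              # drift compounds
    x_refined = refine(surrogate_step(x_refined))  # drift removed each step

print("plain rollout |x|:  ", np.linalg.norm(x_plain))    # ~4e8, blown up
print("refined rollout |x|:", np.linalg.norm(x_refined))  # 1.0
```

Running this, the plain rollout's radius explodes after 1,000 steps while the refined rollout stays on the unit circle, mirroring how per-step refinement suppresses distribution drift without touching the surrogate's short-time predictions.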

Abstract

Recent advances in autoregressive neural surrogate models have enabled orders-of-magnitude speedups in simulating dynamical systems. However, autoregressive models are generally prone to distribution drift: compounding errors in autoregressive rollouts that severely degrade generation quality over long time horizons. Existing work attempts to address this issue by implicitly leveraging the inherent trade-off between short-time accuracy and long-time consistency through hyperparameter tuning. In this work, we introduce a unifying mathematical framework that makes this trade-off explicit, formalizing and generalizing hyperparameter-based strategies in existing approaches. Within this framework, we propose a robust, hyperparameter-free model implemented as a conditional diffusion model that balances short-time fidelity with long-time consistency by construction. Our model, the Self-refining Neural Surrogate (SNS), can be implemented as a standalone model that refines its own autoregressive outputs or as a complementary model to existing neural surrogates to ensure long-time consistency. We also demonstrate the numerical feasibility of SNS through high-fidelity simulations of complex dynamical systems over arbitrarily long time horizons.
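Why drift compounds: as a back-of-the-envelope illustration (a standard error-propagation bound, not a result stated in the paper), suppose the learned one-step map deviates from the true dynamics by at most ε per step and the true dynamics are L-Lipschitz. The rollout error then satisfies a geometric recursion:

```latex
% Illustrative error-compounding bound over n autoregressive steps
% (standard argument; the constants are hypothetical, not from the paper).
e_{n+1} \le L\, e_n + \varepsilon
\quad\Longrightarrow\quad
e_n \le \varepsilon\,\frac{L^n - 1}{L - 1} \qquad (L \ne 1)
```

For L > 1 the bound grows exponentially in n, which is why small one-step error (short-time fidelity) alone cannot guarantee long-time consistency, and why a refinement step that directly targets long-time behavior, as in SNS, can pay off over arbitrarily long horizons.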