Learning to Emulate Chaos: Adversarial Optimal Transport Regularization

arXiv cs.LG · April 24, 2026


Key Points

  • The paper addresses a core challenge in data-driven emulation of chaotic dynamical systems: sensitivity to initial conditions makes exact long-term prediction theoretically infeasible, so naive squared-error training can fail on noisy data.
  • It reviews and builds on prior approaches that regularize neural emulators to match statistical properties of chaotic attractors using handcrafted summary statistics and/or learned statistics from diverse trajectory data.
  • The authors propose a new family of adversarial optimal-transport-based training objectives that learn both high-quality summary statistics and a physically consistent emulator.
  • They provide theoretical analysis and experimental validation for two formulations: a Sinkhorn divergence (2-Wasserstein) version and a WGAN-style dual (1-Wasserstein) version.
  • Across multiple chaotic systems, including high-dimensional chaotic attractors, the proposed method improves long-term statistical fidelity of the learned emulators compared with prior approaches.
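To make the Sinkhorn formulation concrete, here is a minimal, illustrative NumPy sketch of a debiased Sinkhorn divergence between two point clouds (e.g., emulator rollout states vs. reference attractor samples). The function names, `eps`, and iteration count are our own choices for illustration, not the paper's implementation; production code would use a stabilized log-domain solver and backpropagate through this loss.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.5, iters=200):
    """Entropy-regularized OT cost between empirical measures on x and y."""
    # Squared Euclidean cost matrix between the two sample sets.
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))         # uniform weights on x
    b = np.full(len(y), 1.0 / len(y))         # uniform weights on y
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(iters):                    # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # entropic transport plan
    return (P * C).sum()                      # <P, C>, approximates W2^2

def sinkhorn_divergence(x, y, eps=0.5):
    """Debiased Sinkhorn divergence: zero when x and y coincide."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```

The debiasing terms remove the entropic bias so the divergence vanishes for identical point clouds and grows with distributional mismatch, which is what makes it usable as a statistical-fidelity regularizer.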

Abstract

Chaos arises in many complex dynamical systems, from weather to power grids, but is difficult to accurately model using data-driven emulators, including neural operator architectures. For chaotic systems, the inherent sensitivity to initial conditions makes exact long-term forecasts theoretically infeasible, so emulators trained with traditional squared-error losses on noisy data often fail. Recent work has focused on training emulators to match the statistical properties of chaotic attractors by introducing regularization based on handcrafted local features and summary statistics, as well as learned statistics extracted from a diverse dataset of trajectories. In this work, we propose a family of adversarial optimal transport objectives that jointly learn high-quality summary statistics and a physically consistent emulator. We theoretically analyze and experimentally validate a Sinkhorn divergence formulation (2-Wasserstein) and a WGAN-style dual formulation (1-Wasserstein). Our experiments across a variety of chaotic systems, including systems with high-dimensional chaotic attractors, show that emulators trained with our approach exhibit significantly improved long-term statistical fidelity.
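The WGAN-style dual mentioned above rests on the Kantorovich–Rubinstein formulation: the 1-Wasserstein distance is the supremum of E[f(x)] − E[f(y)] over 1-Lipschitz critics f, which in the paper's setting is a trained neural critic. As a hedged illustration of the dual principle only (no training loop, and not the paper's architecture), the sketch below restricts the critic to linear 1-Lipschitz functions in 1D, which gives a closed-form lower bound, and compares it against the exact 1D W1 computed from sorted samples.

```python
import numpy as np

def w1_exact_1d(x, y):
    """Exact 1-Wasserstein distance between equal-size 1D empirical measures.

    In 1D the optimal coupling matches sorted samples, so W1 is the mean
    absolute difference of the order statistics.
    """
    return np.abs(np.sort(x) - np.sort(y)).mean()

def w1_dual_linear(x, y):
    """Kantorovich-Rubinstein dual restricted to linear critics f(z) = w*z.

    With the Lipschitz constraint |w| <= 1, the best linear critic is
    w = sign(mean(x) - mean(y)), yielding |mean(x) - mean(y)|: a lower
    bound on W1 that a richer (e.g., neural) critic would tighten.
    """
    return abs(x.mean() - y.mean())
```

For a pure shift of the samples the linear critic already attains the exact distance; for other mismatches (e.g., a change of scale) it undershoots, which is why WGAN-style training searches over a much larger critic class.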