Robust Learning of Heterogeneous Dynamic Systems

arXiv cs.LG / 4/8/2026


Key Points

  • The paper studies how to learn shared patterns across multiple heterogeneous dynamical systems modeled by ODEs, addressing a gap in existing single-system ODE learning methods.
  • It proposes a distributionally robust learning framework that builds a robust ODE by maximizing a worst-case reward over an uncertainty set defined via convex combinations of trajectory derivatives.
  • The authors derive an explicit weighted-average estimator whose weights come from a quadratic optimization designed to balance information across different data sources.
  • To mitigate potential instability, the paper introduces a bi-level stabilization procedure, and provides theoretical guarantees including consistency of the stabilized weights, robust trajectory error bounds, and asymptotic validity of pointwise confidence intervals.
  • Extensive simulations and a real-data analysis of intracranial EEG recordings show improved generalization performance over alternative approaches.
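As a rough sketch of the weighted-average idea above: derivative estimates from each system are combined with weights obtained by minimizing a quadratic objective over the probability simplex. The Gram-matrix objective and the projected-gradient solver below are illustrative assumptions for intuition, not the paper's exact quadratic optimization.

```python
import numpy as np

def simplex_projection(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def robust_weights(derivs, n_iter=1000):
    """Weights w minimizing w^T Q w over the simplex, where Q is the
    Gram matrix of per-system derivative estimates (illustrative choice)."""
    D = np.stack(derivs)                    # (K, T): K systems, T time points
    Q = D @ D.T                             # Gram matrix of derivatives
    step = 1.0 / (2 * np.linalg.eigvalsh(Q)[-1])  # step size from smoothness
    w = np.full(len(derivs), 1.0 / len(derivs))
    for _ in range(n_iter):
        w = simplex_projection(w - step * (2 * Q @ w))  # projected gradient
    return w

# Toy example: three systems observing a shared derivative with noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
base = np.cos(2 * np.pi * t)
derivs = [base + 0.1 * rng.standard_normal(t.size) for _ in range(3)]
w = robust_weights(derivs)
robust_deriv = w @ np.stack(derivs)         # explicit weighted-average estimator
```

The explicit weighted-average form means the robust derivative is just `w @ D`; the convexity constraint (nonnegative weights summing to one) is exactly the uncertainty class of convex combinations described in the key points.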

Abstract

Ordinary differential equations (ODEs) provide a powerful framework for modeling dynamic systems arising in a wide range of scientific domains. However, most existing ODE methods focus on a single system, and do not adequately address the problem of learning shared patterns from multiple heterogeneous dynamic systems. In this article, we propose a novel distributionally robust learning approach for modeling heterogeneous ODE systems. Specifically, we construct a robust dynamic system by maximizing a worst-case reward over an uncertainty class formed by convex combinations of the derivatives of trajectories. We show that the resulting estimator admits an explicit weighted-average representation, where the weights are obtained from a quadratic optimization that balances information across multiple data sources. We further develop a bi-level stabilization procedure to address potential instability in estimation. We establish rigorous theoretical guarantees for the proposed method, including consistency of the stabilized weights, error bounds for robust trajectory estimation, and asymptotic validity of pointwise confidence intervals. We demonstrate that the proposed method considerably improves generalization performance compared to alternative solutions through both extensive simulations and the analysis of intracranial electroencephalogram data.