Learning to Test: Physics-Informed Representation for Dynamical Instability Detection

arXiv cs.LG / 4/14/2026


Key Points

  • The paper addresses safety-critical dynamical systems modeled as differential-algebraic equations (DAEs), where stability must be re-evaluated under shifting stochastic environmental conditions (distribution shift).
  • It proposes a test-oriented learning framework that avoids repeated expensive DAE simulations and parameter re-estimation by learning a physics-informed latent representation of contextual variables.
  • The latent representation is trained on baseline data from a certified safe regime and regularized toward a tractable reference distribution to support deployment-time safety monitoring.
  • Instability detection is cast as a distributional hypothesis test in latent space with controlled Type I error, providing a statistically grounded mechanism for risk detection.
  • The method integrates neural dynamical surrogates, uncertainty-aware calibration, and uniformity-based testing to scale to high-dimensional or real-time settings.
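To make the latent-space test in the points above concrete, here is a minimal sketch (an illustration under stated assumptions, not the paper's implementation). It assumes the encoder's latent codes are regularized toward a standard Gaussian reference, so under the certified-safe null hypothesis the squared latent norms follow a chi-squared law; a probability integral transform followed by a Kolmogorov–Smirnov uniformity test then flags deviation at significance level alpha, which bounds the Type I error.

```python
import numpy as np
from scipy import stats

def latent_instability_test(z_new, dim, alpha=0.05):
    """Distributional test: do latent codes z_new (shape (n, dim)) still
    look like draws from the N(0, I) reference distribution?

    Under H0 (certified safe regime), ||z||^2 ~ chi2(dim). Mapping the
    squared norms through the chi2 CDF (probability integral transform)
    yields Uniform(0, 1) samples under H0, which a KS test checks.
    Rejecting at level alpha keeps the false-alarm rate at alpha.
    """
    sq_norms = np.sum(z_new ** 2, axis=1)
    u = stats.chi2.cdf(sq_norms, df=dim)   # uniform on [0, 1] under H0
    ks = stats.kstest(u, "uniform")
    return ks.pvalue < alpha, ks.pvalue

rng = np.random.default_rng(0)
dim = 8
z_safe = rng.standard_normal((500, dim))          # matches the reference
z_shift = 1.6 * rng.standard_normal((500, dim))   # variance-inflated shift

flag_safe, p_safe = latent_instability_test(z_safe, dim)
flag_shift, p_shift = latent_instability_test(z_shift, dim)
```

Here the chi-squared reduction is one simple choice of test statistic; any statistic whose null distribution is known under the reference would support the same probability-integral-transform construction.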

Abstract

Many safety-critical scientific and engineering systems evolve according to differential-algebraic equations (DAEs), where dynamical behavior is constrained by physical laws and admissibility conditions. In practice, these systems operate under stochastically varying environmental inputs, so stability is not a static property but must be reassessed as the context distribution shifts. Repeated large-scale DAE simulation, however, is computationally prohibitive in high-dimensional or real-time settings. This paper proposes a test-oriented learning framework for stability assessment under distribution shift. Rather than re-estimating physical parameters or repeatedly solving the underlying DAE, we learn a physics-informed latent representation of contextual variables that captures stability-relevant structure and is regularized toward a tractable reference distribution. Trained on baseline data from a certified safe regime, the learned representation enables deployment-time safety monitoring to be formulated as a distributional hypothesis test in latent space, with controlled Type I error. By integrating neural dynamical surrogates, uncertainty-aware calibration, and uniformity-based testing, our approach provides a scalable and statistically grounded method for detecting instability risk in stochastic constrained dynamical systems without repeated simulation.
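The "surrogate instead of repeated simulation" idea can be illustrated with a toy sketch (the linear test system and all names here are illustrative assumptions, not from the paper): offline, baseline contexts are sampled and a stability margin is computed from a cheap stand-in for the DAE; a polynomial surrogate is then fit so that deployment-time checks evaluate the surrogate rather than re-solving the system.

```python
import numpy as np

# Toy stand-in for the constrained dynamics: a context-dependent linear
# system dx/dt = A(c) x, whose stability margin is the largest real part
# of the eigenvalues of A(c) (negative margin => stable).
def stability_margin(c):
    A = np.array([[-1.0 + c, 2.0],
                  [-2.0, -1.0 - 0.5 * c]])
    return np.max(np.linalg.eigvals(A).real)

# Offline stage: sample baseline contexts from the safe regime and fit a
# cheap polynomial surrogate mapping context -> stability margin.
rng = np.random.default_rng(1)
c_train = rng.uniform(-1.0, 1.0, size=200)
y_train = np.array([stability_margin(c) for c in c_train])
coeffs = np.polyfit(c_train, y_train, deg=4)   # least-squares fit

# Deployment stage: evaluate the surrogate instead of re-solving the
# system for each new context.
c_new = 0.3
margin_hat = np.polyval(coeffs, c_new)
unstable = margin_hat > 0.0
```

In the paper's setting the surrogate is a neural model of the full DAE dynamics rather than a polynomial fit of a scalar margin, but the division of labor is the same: an expensive offline stage amortizes the simulation cost, and deployment-time monitoring touches only the learned model.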