AI Navigate

Deep Invertible Autoencoders for Dimensionality Reduction of Dynamical Systems

arXiv cs.LG / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper introduces a deep invertible autoencoder (inv-AE) built from invertible neural network layers that gradually recovers more information as the reduced manifold dimension grows.
  • Inv-AE mitigates the projection-error plateau common to traditional autoencoders and improves reconstruction quality for reduced-order models.
  • The method can be integrated with popular projection-based ROM approaches to boost accuracy.
  • The authors demonstrate inv-AE on a parametric 1D Burgers' equation and a parametric 2D flow around an obstacle with variable geometry, showing improved performance.
  • This approach addresses limitations of POD-based and AE-based ROMs in transport- and advection-dominated regimes where singular-value decay is slow.
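To make the POD limitation in the last point concrete: POD builds the reduced basis from a truncated SVD of a snapshot matrix, and its projection error is governed by the discarded singular values. The toy example below (not from the paper; a minimal NumPy sketch with an arbitrary traveling step front) shows why transport-dominated snapshots force many POD modes:

```python
import numpy as np

# Snapshot matrix for a traveling step front (advection-dominated toy problem):
# each column is the state at one time, u(x, t) = 1{x < t}.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.1, 0.9, 50)
S = np.array([(x < t).astype(float) for t in times]).T  # shape (200, 50)

# POD basis = left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(S, full_matrices=False)

def pod_projection_error(r):
    """Relative Frobenius error of the rank-r POD projection of S."""
    Ur = U[:, :r]
    return np.linalg.norm(S - Ur @ (Ur.T @ S)) / np.linalg.norm(S)

# The singular values decay slowly for the moving front, so the
# projection error shrinks only gradually as modes are added.
for r in (1, 5, 20):
    print(r, pod_projection_error(r))
```

The slow decay seen here is exactly the regime where the paper argues AE-based (and, in particular, invertible-AE) reductions pay off over POD.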

Abstract

Constructing reduced-order models (ROMs) capable of efficiently predicting the evolution of high-dimensional, parametric systems is crucial in many applications in engineering and applied sciences. A popular class of projection-based ROMs projects the high-dimensional full-order model (FOM) dynamics onto a low-dimensional manifold. These projection-based ROM approaches often rely on classical model reduction techniques such as proper orthogonal decomposition (POD) or, more recently, on neural network architectures such as autoencoders (AEs). When the ROM is constructed via POD, one has approximation guarantees based on the singular values of the problem at hand. However, POD-based techniques can suffer from slow decay of the singular values in transport- and advection-dominated problems. In contrast, AEs often achieve better reduction than POD with only the first few modes, but at the price of weaker theoretical guarantees. In addition, AEs are often observed to exhibit a plateau of the projection error as the dimension of the trial manifold increases. In this work, we propose a deep invertible AE architecture, named inv-AE, that mitigates the stagnation of the projection error typical of traditional (e.g., convolutional) AE architectures and improves reconstruction quality. Inv-AE is composed of several invertible neural network layers that allow for gradually recovering more information about the FOM solutions as the dimension of the reduced manifold increases. Through the application of inv-AE to a parametric 1D Burgers' equation and a parametric 2D fluid flow around an obstacle with variable geometry, we show that (i) inv-AE mitigates the issue of the characteristic plateau of (convolutional) AEs and (ii) inv-AE can be combined with popular projection-based ROM approaches to improve their accuracy.
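The mechanism behind "gradually recovering more information" can be illustrated with a generic invertible network built from additive coupling layers (RealNVP-style). This is not the paper's inv-AE architecture; it is a minimal linear NumPy sketch, with arbitrary dimensions and random weights, of the general idea: encode with an exactly invertible map, truncate the latent vector to its first r components, and invert. Because the full map is bijective, the reconstruction error can vanish once r reaches the full dimension, rather than plateauing:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # full state dimension (toy size; real FOM states are much larger)

class AdditiveCoupling:
    """One invertible layer: y = [x1, x2 + W @ x1] for the split x = [x1, x2]."""
    def __init__(self, d):
        self.d1 = d // 2
        self.W = rng.normal(scale=0.5, size=(d - self.d1, self.d1))

    def forward(self, x):
        x1, x2 = x[: self.d1], x[self.d1 :]
        return np.concatenate([x1, x2 + self.W @ x1])

    def inverse(self, y):
        y1, y2 = y[: self.d1], y[self.d1 :]
        return np.concatenate([y1, y2 - self.W @ y1])

layers = [AdditiveCoupling(D) for _ in range(4)]

def encode(x):
    for layer in layers:
        x = layer.forward(x)[::-1]  # flip so both halves get updated
    return x

def decode(z):
    for layer in reversed(layers):
        z = layer.inverse(z[::-1])  # undo the flip, then invert the layer
    return z

def reconstruct(x, r):
    """Keep the first r latent components, zero the rest, then invert."""
    z = encode(x)
    return decode(np.concatenate([z[:r], np.zeros(D - r)]))

x = rng.normal(size=D)
errors = [np.linalg.norm(x - reconstruct(x, r)) for r in range(D + 1)]
# At r = D the map is exactly invertible, so the error drops to (numerically) zero.
```

Growing r hands the decoder more of the latent code, so the reconstruction improves with the reduced dimension instead of saturating; the paper's contribution is a trained, nonlinear architecture of this invertible type, combined with projection-based ROMs.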