AI Navigate

Slack More, Predict Better: Proximal Relaxation for Probabilistic Latent Variable Model-based Soft Sensors

arXiv cs.LG / 3/13/2026


Key Points

  • KProxNPLVM introduces a novel probabilistic latent variable model that relaxes the learning objective by using the Wasserstein distance as a proximal operator to improve soft sensor performance.
  • The work identifies that conventional amortized variational inference with neural-network parameterization can incur an approximation error due to finite-dimensional optimization.
  • It provides a rigorous optimization derivation, proves convergence, and shows how the relaxation can sidestep the approximation gap.
  • Extensive experiments on synthetic and real-world industrial datasets demonstrate the efficacy and robustness of KProxNPLVM for soft sensor applications.
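The Wasserstein-proximal relaxation mentioned above follows the generic proximal-point pattern: each update minimizes the base objective plus a Wasserstein penalty anchored at the current posterior. Below is a minimal, hypothetical sketch of that pattern for a diagonal-Gaussian variational posterior, where the squared 2-Wasserstein distance has a closed form; the function names, step sizes, and the choice of plain gradient descent for the inner solve are illustrative assumptions, not the paper's actual KProxNPLVM algorithm.

```python
import numpy as np

def w2_sq_diag_gauss(mu1, sig1, mu2, sig2):
    """Squared 2-Wasserstein distance between diagonal Gaussians (closed form):
    ||mu1 - mu2||^2 + ||sig1 - sig2||^2."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((sig1 - sig2) ** 2)

def proximal_step(mu, sig, grad_mu, grad_sig, lam=0.5, lr=0.1, inner_iters=50):
    """One Wasserstein-proximal update:
        q_{t+1} = argmin_q  F(q) + W2^2(q, q_t) / (2 * lam),
    solved approximately by gradient descent from the anchor q_t = (mu, sig).
    grad_mu / grad_sig return gradients of the base objective F
    (e.g., a negative ELBO) with respect to the Gaussian's parameters."""
    mu_t, sig_t = mu.copy(), sig.copy()  # proximal anchor q_t
    for _ in range(inner_iters):
        # gradient of the proximal term W2^2(q, q_t) / (2 * lam)
        g_mu = grad_mu(mu, sig) + (mu - mu_t) / lam
        g_sig = grad_sig(mu, sig) + (sig - sig_t) / lam
        mu = mu - lr * g_mu
        sig = np.maximum(sig - lr * g_sig, 1e-6)  # keep scales positive
    return mu, sig
```

With a simple convex stand-in for F (squared distance to a target Gaussian), repeating `proximal_step` contracts the iterate toward the minimizer of F while the proximal term keeps each step close to the previous posterior, which is the stabilizing effect a proximal relaxation is meant to provide.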

Abstract

Nonlinear Probabilistic Latent Variable Models (NPLVMs) are a cornerstone of soft sensor modeling due to their capacity for uncertainty quantification. However, conventional NPLVMs are trained with amortized variational inference, in which neural networks parameterize the variational posterior. While this parameterization simplifies implementation, it converts a distributional optimization problem over an infinite-dimensional function space into parameter optimization over a finite-dimensional parameter space, introducing an approximation gap that degrades soft sensor modeling accuracy. To alleviate this issue, we introduce KProxNPLVM, a novel NPLVM that instead relaxes the learning objective itself. Specifically, we first prove that the conventional approach induces an approximation error. Building on this, we use the Wasserstein distance as a proximal operator to relax the learning objective, yielding a new variational inference strategy derived from solving the relaxed optimization problem. On this foundation, we give a rigorous derivation of KProxNPLVM's optimization procedure, prove that the algorithm converges and thereby sidesteps the approximation error, and assemble these components into the full KProxNPLVM. Finally, extensive experiments on synthetic and real-world industrial datasets demonstrate the efficacy of the proposed KProxNPLVM.
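The approximation gap the abstract describes is closely related to the well-known amortization gap: once the variational parameters must be produced by a shared, finite-capacity network, the best achievable objective can only get worse than free per-datapoint optimization. The toy sketch below illustrates this with a squared-error stand-in for the variational objective and a linear "encoder"; all names and the quadratic target are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy illustration of the amortization gap: restricting variational
# parameters to the outputs of a shared finite-capacity map can only
# raise the best achievable objective versus per-point optimization.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
mu_star = x ** 2  # hypothetical per-point optimal variational means

def objective(mu):
    # squared error to the per-point optima, standing in for a KL term
    return np.mean((mu - mu_star) ** 2)

# Per-point inference: each mu_i is optimized freely, so it reaches
# the optimum exactly.
per_point = objective(mu_star)

# Amortized inference with a linear "encoder" mu_i = a * x_i + b:
# the least-squares fit is the best this restricted family can do,
# and a linear map cannot represent the quadratic target.
A = np.stack([x, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, mu_star, rcond=None)
amortized = objective(A @ coef)

# amortized > per_point: the amortization/approximation gap
```

In this sketch `per_point` is exactly zero while `amortized` stays strictly positive, mirroring the finite-dimensional approximation error that KProxNPLVM's relaxed objective is designed to sidestep.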