MCMC with Adaptive Principal-Component Transformation: Rotation-Invariant Universal Samplers for Bayesian Structural System Identification

arXiv stat.ML / April 28, 2026


Key Points

  • The paper introduces APM-SGHMC, a meta-learning MCMC method that adaptively rotates coordinate axes to match the principal-component (PC) directions of the current posterior samples for improved sampling efficiency.
  • It aims for rotation-invariant sampling performance by combining translation-, scale-, and rotation-invariance in a unified framework, reducing dependence on problem-specific coordinate choices.
  • Compared with prior trainable or meta-learning MCMC approaches, the method aims to generalize from a smaller, more minimalist set of training tasks, and to avoid the sampling-efficiency limits imposed by the simplified network designs those approaches adopt as a trade-off.
  • The authors address practical feasibility concerns and validate the approach on Bayesian structural system identification case studies, reporting zero-shot generalization across structurally distinct models without retraining.
  • Results suggest the sampler can overcome case-by-case constraints typical of traditional data-driven approaches while maintaining consistently strong performance across scenarios.
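The core adaptive step described above, rotating the coordinate axes onto the principal-component directions of the current posterior samples, can be sketched as an eigendecomposition of the sample covariance. This is an illustrative reconstruction, not the paper's exact construction; the function name `pc_transform` and all numerical details are assumptions:

```python
import numpy as np

def pc_transform(samples):
    """Estimate an affine map that centers, rotates, and rescales
    coordinates onto the principal-component (PC) directions of the
    given posterior samples.

    Returns (mean, R, scales) such that the transformed coordinates
    are z = (samples - mean) @ R / scales.
    Illustrative sketch only; APM-SGHMC's actual transform may differ.
    """
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    # Columns of R are the PC directions (eigenvectors of the sample
    # covariance); eigenvalues give per-direction variances.
    eigvals, R = np.linalg.eigh(cov)
    scales = np.sqrt(np.maximum(eigvals, 1e-12))
    return mean, R, scales

# Toy "posterior": strongly correlated 2-D Gaussian samples.
rng = np.random.default_rng(0)
A = np.array([[3.0, 0.0], [2.5, 0.5]])
x = rng.standard_normal((5000, 2)) @ A.T + np.array([1.0, -2.0])

mean, R, s = pc_transform(x)
z = (x - mean) @ R / s   # translated, rotated, and rescaled coordinates
print(np.round(np.cov(z, rowvar=False), 2))  # approximately the identity
```

In the transformed coordinates `z` the sample covariance is the identity, which is exactly the translation-, scale-, and rotation-invariance the paper combines: any rotation of the original parameter space yields the same `z` up to sign flips of the axes.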

Abstract

Over the past decades, Markov chain Monte Carlo (MCMC) methods have been widely studied, with a typical application being the quantification of posterior uncertainties in Bayesian system identification of structural dynamic models. To address the excessively low sampling efficiency of generic MCMC methods on specific problems, researchers developed several MCMC algorithms that integrate trainable neural networks to replace and enhance their critical components. Later, meta-learning MCMC methods emerged to reduce training time. However, they require considerable similarity between test and training tasks, while their sampling efficiency is constrained by trade-off-simplified network designs. This paper proposes the Adaptive Principal-Component (PC) Meta-learning Stochastic Gradient Hamiltonian Monte Carlo (APM-SGHMC) algorithm. It adaptively rotates coordinate axes in the parameter space to align with the PC directions of the current posterior samples, ensuring rotation-invariance of sampling performance with respect to the posterior distribution. By incorporating translation-invariance, scale-invariance, and rotation-invariance in a unified framework, APM-SGHMC enables universal samplers to acquire generalizable knowledge across diverse Bayesian system identification tasks from minimalistic training tasks, while eliminating the constraints that network design trade-offs impose on sampling efficiency. Practical feasibility issues are also addressed. Two Bayesian system identification case studies demonstrate its effectiveness and universality: the method overcomes the case-by-case limitations of traditional data-driven approaches, achieving zero-shot generalization across structurally distinct models without retraining and maintaining consistently superior performance across all scenarios.
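To illustrate how a stochastic-gradient HMC update can run in translated, rescaled, and rotated (PC-aligned) coordinates, here is a minimal sketch of a naive SGHMC-style chain. It is a hypothetical simplification: APM-SGHMC additionally meta-learns components and re-estimates the PC transform adaptively during sampling, neither of which is shown, and all function names and step-size choices here are assumptions:

```python
import numpy as np

def sghmc_in_pc_coords(grad_logp, x0, mean, R, s,
                       n_steps=2000, eps=0.05, friction=1.0, rng=None):
    """Naive SGHMC-style chain run in the transformed coordinates
    z = (x - mean) @ R / s, where R holds PC directions and s the
    per-direction standard deviations. Illustrative sketch only.
    """
    rng = rng or np.random.default_rng(0)
    z = (x0 - mean) @ R / s
    v = np.zeros_like(z)          # momentum in transformed coordinates
    out = []
    for _ in range(n_steps):
        x = mean + (z * s) @ R.T              # map back to original coords
        g_z = (grad_logp(x) @ R) * s          # chain rule: d log p / dz
        # SGHMC-form update: gradient, friction, and injected noise.
        v += (eps * g_z - friction * eps * v
              + np.sqrt(2.0 * friction * eps) * rng.standard_normal(z.shape))
        z += eps * v
        out.append(mean + (z * s) @ R.T)
    return np.array(out)

# Toy target: correlated 2-D Gaussian with mean mu and covariance C.
mu = np.array([1.0, -2.0])
A = np.array([[3.0, 0.0], [2.5, 0.5]])
C = A @ A.T
Cinv = np.linalg.inv(C)
grad_logp = lambda x: -(x - mu) @ Cinv        # gradient of the log-density

eigvals, R = np.linalg.eigh(C)                # PC directions of the target
s = np.sqrt(eigvals)
samples = sghmc_in_pc_coords(grad_logp, np.zeros(2), mu, R, s)
print(samples[-1000:].mean(axis=0))           # roughly mu after burn-in
```

In the `z` coordinates the toy target is approximately a standard normal, so a single generic step size works along every direction; this is the mechanism by which the PC transform removes the dependence of sampling efficiency on the posterior's orientation and scaling.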