Learning Multimodal Energy-Based Model with Multimodal Variational Auto-Encoder via MCMC Revision

arXiv cs.LG / 5/4/2026


Key Points

  • The paper proposes a method for learning multimodal energy-based models (EBMs) that addresses poor mixing in MCMC-based maximum-likelihood training in the joint data space.
  • It combines multimodal VAEs with EBMs by jointly training a shared latent generator and a joint inference model using interwoven maximum-likelihood updates and MCMC refinements in both data and latent spaces.
  • The generator is trained to output coherent multimodal samples that serve as good initial states for EBM sampling, improving the subsequent Langevin dynamics (see the sampling sketch after this list).
  • The inference model is trained to provide informative latent initializations for sampling from the generator’s posterior, improving latent-space exploration.
  • Experiments, analyses, and ablation studies show improved multimodal synthesis quality and coherence over multiple baselines, along with evidence of scalability.
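The generator-initialized EBM sampling in the third point can be made concrete with a short-run Langevin sketch. Below is a minimal PyTorch illustration, assuming an `ebm` module that maps a joint multimodal sample to a scalar energy; the function name, step count, and step size are illustrative assumptions, not values from the paper.

```python
import torch

def langevin_refine(ebm, x_init, n_steps=40, step_size=0.01):
    """Short-run Langevin dynamics in the joint data space.

    x_init comes from the shared latent generator rather than noise,
    so the chain starts near coherent multimodal configurations.
    (Illustrative sketch; hyperparameters are assumptions.)
    """
    x = x_init.clone().detach()
    for _ in range(n_steps):
        x.requires_grad_(True)
        energy = ebm(x).sum()                    # scalar joint energy E(x)
        grad, = torch.autograd.grad(energy, x)   # gradient of E w.r.t. x
        # Descend the energy and inject Gaussian noise (Langevin step).
        x = (x - 0.5 * step_size ** 2 * grad
               + step_size * torch.randn_like(x)).detach()
    return x
```

Initializing from generator output instead of noise is the point of the design: the short-run chain only needs to refine already-coherent samples, rather than discover cross-modal structure from scratch as a noise-initialized chain would.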

Abstract

Energy-based models (EBMs) are a flexible class of deep generative models and are well-suited to capturing complex dependencies in multimodal data. However, learning a multimodal EBM by maximum likelihood requires Markov chain Monte Carlo (MCMC) sampling in the joint data space, where noise-initialized Langevin dynamics often mixes poorly and fails to discover coherent inter-modal relationships. Multimodal variational auto-encoders (VAEs) have made progress in capturing such inter-modal dependencies by introducing a shared latent generator and a joint inference model. However, both the shared latent generator and the joint inference model are parameterized as unimodal Gaussian (or Laplace) distributions, which severely limits their ability to approximate the complex structure induced by multimodal data. In this work, we study the joint learning problem of the multimodal EBM, the shared latent generator, and the joint inference model. We present a learning framework that effectively interweaves their maximum-likelihood updates with corresponding MCMC refinements in both the data and latent spaces. Specifically, the generator is learned to produce coherent multimodal samples that serve as strong initial states for EBM sampling, while the inference model is learned to provide informative latent initializations for generator posterior sampling. Together, the two models act as complementary components that enable effective EBM sampling and learning, yielding realistic and coherent multimodal EBM samples. Extensive experiments demonstrate superior multimodal synthesis quality and coherence compared to various baselines, and various analyses and ablation studies validate the effectiveness and scalability of the proposed multimodal framework.
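To make the interweaving of MLE updates and MCMC refinements concrete, here is a minimal sketch of one training step. It assumes interfaces `gen(z) -> x_hat` (shared latent generator), `inf(x) -> z` (joint inference model), a standard-normal latent prior, and the `langevin_refine` from the sketch above; the losses, step counts, and learning rates are illustrative assumptions, not the authors' objective.

```python
import torch

def train_step(ebm, gen, inf, x, opt_ebm, opt_gen, opt_inf,
               k_latent=20, lr_z=0.1):
    """One interleaved MLE/MCMC update (illustrative sketch)."""
    # 1) Latent-space refinement: the inference model proposes z,
    #    then short-run Langevin moves it toward the generator
    #    posterior p(z | x) under a standard-normal prior.
    z = inf(x).detach()
    for _ in range(k_latent):
        z.requires_grad_(True)
        neg_log_joint = ((gen(z) - x) ** 2).sum() + 0.5 * (z ** 2).sum()
        grad, = torch.autograd.grad(neg_log_joint, z)
        z = (z - 0.5 * lr_z ** 2 * grad
               + lr_z * torch.randn_like(z)).detach()

    # 2) Generator MLE update from the refined posterior samples.
    opt_gen.zero_grad()
    ((gen(z) - x) ** 2).mean().backward()
    opt_gen.step()

    # 3) Inference model learns to predict the refined latents, so its
    #    next proposal is a more informative initialization.
    opt_inf.zero_grad()
    ((inf(x) - z) ** 2).mean().backward()
    opt_inf.step()

    # 4) EBM update: prior samples pushed through the generator
    #    initialize data-space Langevin; contrast data vs. samples.
    x_neg = langevin_refine(ebm, gen(torch.randn_like(z)).detach())
    opt_ebm.zero_grad()
    (ebm(x).mean() - ebm(x_neg).mean()).backward()
    opt_ebm.step()
```

Under this reading, each component closes a loop for the other: better generator samples let the data-space chains stay short, and better inference proposals let the latent-space chains stay short, which is what the abstract means by the two models being complementary.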