Instance-Adaptive Parametrization for Amortized Variational Inference

arXiv cs.LG / 4/9/2026


Key Points

  • The paper introduces instance-adaptive variational autoencoders (IA-VAE), which use a hypernetwork to generate input-dependent parameter modulations for an otherwise shared inference encoder.
  • IA-VAE aims to address the amortization gap in amortized variational inference by adding flexibility to the posterior approximation without losing the efficiency of a single forward pass.
  • Experiments on synthetic datasets with known true posteriors show IA-VAE produces more accurate posterior approximations and lowers the amortization gap versus standard VAEs.
  • On common image benchmarks, IA-VAE improves held-out ELBO over baseline VAEs, with statistically significant gains across multiple runs, indicating consistent performance improvements.
  • Overall, the results suggest that instance-specific modulation of inference parametrization can be a key lever for reducing amortization-induced suboptimality in deep generative models.
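The core idea in the key points above — a hypernetwork that emits input-dependent modulations of a shared inference encoder — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dimensions, the single hidden layer, and the FiLM-style scale-and-shift modulation are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
x_dim, h_dim, z_dim = 8, 16, 4

# Shared encoder weights, used for every input.
W_enc = rng.normal(scale=0.1, size=(h_dim, x_dim))
W_mu = rng.normal(scale=0.1, size=(z_dim, h_dim))
W_logvar = rng.normal(scale=0.1, size=(z_dim, h_dim))

# Hypernetwork: maps the input itself to per-instance modulation parameters.
W_hyper = rng.normal(scale=0.1, size=(2 * h_dim, x_dim))

def encode(x):
    # Hypernetwork output -> instance-specific scale (gamma) and shift (beta).
    mod = W_hyper @ x
    gamma, beta = 1.0 + mod[:h_dim], mod[h_dim:]
    # One forward pass through the shared encoder, with the hidden
    # activations modulated per input instance.
    h = np.tanh(gamma * (W_enc @ x) + beta)
    return W_mu @ h, W_logvar @ h  # mean and log-variance of q(z | x)

mu, logvar = encode(rng.normal(size=x_dim))
print(mu.shape, logvar.shape)  # (4,) (4,)
```

Because the modulation is produced in the same forward pass, inference stays amortized: no per-instance optimization loop is needed, only one extra hypernetwork evaluation per input.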

Abstract

Latent variable models, including variational autoencoders (VAEs), remain a central tool in modern deep generative modeling due to their scalability and well-founded probabilistic formulation. These models rely on amortized variational inference to enable efficient posterior approximation, but this efficiency comes at the cost of a shared parametrization, giving rise to the amortization gap. We propose the instance-adaptive variational autoencoder (IA-VAE), an amortized variational inference framework in which a hypernetwork generates input-dependent modulations of a shared encoder. This enables input-specific adaptation of the inference model while preserving the efficiency of a single forward pass. By leveraging instance-specific parameter modulations, the proposed approach can achieve performance comparable to standard encoders with substantially fewer parameters, indicating a more efficient use of model capacity. Experiments on synthetic data, where the true posterior is known, show that IA-VAE yields more accurate posterior approximations and reduces the amortization gap. On standard image benchmarks, IA-VAE likewise consistently improves held-out ELBO over baseline VAEs, with statistically significant gains across multiple runs. These results suggest that increasing the flexibility of the inference parametrization through instance-adaptive modulation is a key factor in mitigating amortization-induced suboptimality in deep generative models.
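The amortization gap the abstract refers to is the ELBO shortfall of a shared encoder relative to the best per-instance variational parameters. In a toy linear-Gaussian model the exact posterior is Gaussian and in the variational family, so the gap reduces to the KL divergence between the amortized approximation and the true posterior. The sketch below is illustrative only; the model, the encoder form `q(z|x) = N(a*x, s2)`, and all numbers are assumptions, not the paper's setup.

```python
import numpy as np

# Toy model: p(z) = N(0, 1), p(x | z) = N(z, sigma2).
sigma2 = 0.5

def true_posterior(x):
    # Standard Gaussian conjugacy: posterior mean x/(1+sigma2),
    # posterior variance sigma2/(1+sigma2).
    var = sigma2 / (1.0 + sigma2)
    return x / (1.0 + sigma2), var

def kl_gauss(m_q, v_q, m_p, v_p):
    # KL( N(m_q, v_q) || N(m_p, v_p) ) for univariate Gaussians.
    return 0.5 * (np.log(v_p / v_q) + (v_q + (m_q - m_p) ** 2) / v_p - 1.0)

# A deliberately suboptimal shared (amortized) encoder: one (a, s2) for all x.
a, s2 = 0.5, 0.4

xs = np.linspace(-3, 3, 7)
# The family contains the true posterior, so the per-instance optimum has
# zero KL, and the per-instance amortization gap is KL(q_amortized || posterior).
gaps = [kl_gauss(a * x, s2, *true_posterior(x)) for x in xs]
print(np.round(gaps, 4))
```

An instance-adaptive encoder that adjusts its effective parameters per input can drive these KL terms toward zero, which is the sense in which the paper's synthetic experiments (with known posteriors) measure a reduced gap.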