Variational Learning of Fractional Posteriors

arXiv cs.LG / March 31, 2026

Key Points

  • The paper proposes a new one-parameter variational objective that lower-bounds the data evidence while enabling estimation of approximate fractional posteriors (see the background note after this list).
  • It extends the method to hierarchical constructions and Bayes posteriors, aiming to provide a flexible probabilistic modeling framework.
  • The authors derive analytical gradients for two special cases and report simulation results on mixture models showing improved calibration versus standard variational posteriors.
  • When applied to variational autoencoders, the approach yields higher evidence bounds and supports joint learning of fractional and approximate Bayes posteriors.
  • VAEs trained with fractional posteriors are shown to produce decoders better aligned for generating samples from the prior.
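
For background, and not specific to this paper's construction: a fractional posterior with inverse temperature β tempers the likelihood relative to the prior, and the classical tempered variational bound down-weights the reconstruction term accordingly. The paper's one-parameter objective differs in that it lower-bounds the full data evidence; the standard identities below are included only as orientation.

```latex
% Fractional (tempered) posterior with inverse temperature \beta:
\[
  p_\beta(\theta \mid x)
    = \frac{p(x \mid \theta)^{\beta}\, p(\theta)}
           {\int p(x \mid \theta')^{\beta}\, p(\theta')\, d\theta'},
  \qquad \beta \in (0, 1].
\]
% Classical tempered variational bound on the \beta-evidence:
\[
  \log \int p(x \mid \theta)^{\beta}\, p(\theta)\, d\theta
    \;\ge\; \mathbb{E}_{q(\theta)}\!\bigl[\beta \log p(x \mid \theta)\bigr]
    - \mathrm{KL}\!\bigl(q(\theta)\,\|\,p(\theta)\bigr).
\]
```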

Abstract

We introduce a novel one-parameter variational objective that lower-bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to hierarchical constructions and Bayes posteriors, offering a versatile tool for probabilistic modeling. We present two cases in which gradients can be obtained analytically, and a simulation study on mixture models shows that our fractional posteriors achieve better calibration than posteriors obtained from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables joint learning of high-performing approximate Bayes posteriors and fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better aligned for generation from the prior.
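
To make the tempering concrete in the VAE setting, the sketch below implements a β-weighted (tempered) ELBO for a Bernoulli-decoder VAE with a diagonal-Gaussian encoder. This is the conventional fractional bound, not necessarily the paper's exact one-parameter objective, and all names (`fractional_elbo`, the tensor shapes, `beta=0.7`) are illustrative assumptions.

```python
import torch

def fractional_elbo(x, x_logits, mu, logvar, beta=0.7):
    """Tempered (fractional) variational bound for a Bernoulli-decoder VAE.

    With beta = 1 this reduces to the standard ELBO; for 0 < beta < 1 the
    likelihood is down-weighted, so the optimal q(z|x) targets the
    fractional posterior p_beta(z|x) ∝ p(x|z)^beta p(z).
    NOTE: this is the classical tempered bound, shown only to illustrate
    the role of beta; it is not the paper's proposed objective.
    """
    # Per-example Bernoulli log-likelihood log p(x|z) from decoder logits.
    log_lik = -torch.nn.functional.binary_cross_entropy_with_logits(
        x_logits, x, reduction="none"
    ).sum(dim=-1)
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=-1)
    # Maximize: beta-weighted reconstruction minus KL to the prior.
    return (beta * log_lik - kl).mean()

# Usage with dummy encoder/decoder outputs (batch of 32 binarized inputs).
x = torch.rand(32, 784).round()
x_logits = torch.randn(32, 784)
mu, logvar = torch.randn(32, 16), torch.randn(32, 16)
print(fractional_elbo(x, x_logits, mu, logvar, beta=0.7))
```

Setting β below 1 flattens the likelihood, which typically widens the fitted posterior; the key points above attribute the improved calibration and the prior-aligned decoders to fitting such fractional posteriors.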