Reparameterization through Coverings and Topological Weight Priors

arXiv cs.LG / 4/28/2026


Key Points

  • The paper generalizes the VAE reparameterization trick to latent spaces with non-trivial topology by using covering maps between manifolds.
  • Because covering maps are measurable, and under a certain measure-preservation condition on the covering, the authors derive an inequality on the KL divergence between pushforward densities that makes the VAE ELBO’s KL term analytically tractable even when the latent manifold is topologically complex.
  • The proposed framework relates to (and is seen as encompassing) prior work on reparameterization on Lie groups via the exponential map, but allows more general topologies where global diffeomorphisms are not available.
  • The authors validate the method by building KleinVAE, a VAE whose latent space is a Klein bottle topology, and they successfully learn an artificial dataset using this architecture.
  • They discuss using topology-informed generative models as weight priors in Bayesian learning, with particular relevance to convolutional vision models where the Klein bottle manifold has been suggested to matter.
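The core mechanism in the first two points can be illustrated on the simplest non-trivial covering, ℝ → S¹. The sketch below is hypothetical (not the paper's code, and using plain NumPy rather than a differentiable framework): a sample is drawn with the ordinary Gaussian reparameterization trick in the covering space ℝ and pushed forward to the circle through the covering map, and the pushforward density is obtained by summing the normal density over the fiber, i.e. over the deck transformations.

```python
import numpy as np

def sample_circle(mu, sigma, eps):
    """Reparameterized sample on S^1: draw z = mu + sigma*eps in the
    covering space R, then apply the covering map p(t) = t mod 2*pi."""
    z = mu + sigma * eps          # differentiable w.r.t. mu, sigma
    return z % (2.0 * np.pi)      # covering map: measurable, not injective

def wrapped_normal_pdf(theta, mu, sigma, n_terms=20):
    """Pushforward (wrapped normal) density on S^1: sum the normal
    density over the fiber p^{-1}(theta) = {theta + 2*pi*k}."""
    k = np.arange(-n_terms, n_terms + 1)
    x = theta + 2.0 * np.pi * k
    return np.sum(np.exp(-0.5 * ((x - mu) / sigma) ** 2)
                  / (sigma * np.sqrt(2.0 * np.pi)))
```

It is pushforward densities of this kind that enter the KL term of the ELBO; the paper's inequality, under its measure-preservation condition on the covering, is what keeps that term tractable for more complex base manifolds.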

Abstract

We generalise the reparameterization trick used in variational autoencoders (VAEs) to latent spaces of non-trivial topology, namely base manifolds covered by other manifolds on which some reparameterization technique is already available. This is possible because covering maps are measurable; moreover, when a particular measure-preservation property holds for the covering, one can establish an inequality on the KL divergence between pushforward densities on the base latent manifold, making the KL term of the VAE's ELBO analytically tractable despite the topological non-triviality of the latent manifold. Our development follows a route close to, but somewhat different from, reparameterization on Lie groups, the latest proposal for which reparameterizes pushforwards of normal densities from the Lie algebra through the exponential map; we view this as, in some cases, a particular instance of what we propose to call reparameterization through a covering. Covering maps need not be global diffeomorphisms (Lie exponential maps need not be either in general, but, to the best of our knowledge, only smooth ones have been considered in this context so far), which makes many non-trivial topologies amenable to the proposed technique; we detail it on one such example. We demonstrate our approach by constructing a VAE whose latent space has the topology of a Klein bottle (not a Lie group), which we call KleinVAE, and successfully learning an appropriate artificial dataset with it. We discuss the potential applicability of such topology-informed generative models as weight priors in Bayesian learning, particularly for convolutional vision models, where the Klein bottle manifold has, curiously, been shown to have some relevance.
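To make the KleinVAE construction concrete, here is a hedged sketch (an illustration under standard conventions, not the authors' implementation) of the Klein bottle as the quotient ℝ²/G, where G is the deck group generated by a: (x, y) ↦ (x + 1, y) and b: (x, y) ↦ (−x, y + 1). The covering map sends any point of ℝ² to its canonical representative in the fundamental domain [0, 1) × [0, 1), so a Gaussian reparameterized sample in ℝ² pushes forward to a sample on the Klein bottle.

```python
import numpy as np

def klein_project(x, y):
    """Covering map R^2 -> Klein bottle, in fundamental-domain
    coordinates [0, 1) x [0, 1)."""
    n = np.floor(y)                  # number of times b was applied
    y0 = y - n                       # undo the vertical translations
    sign = 1.0 - 2.0 * (n % 2.0)     # b flips x once per odd sheet
    x0 = (sign * x) % 1.0            # then reduce x by the a-translations
    return x0, y0

def sample_klein(mu, sigma, eps):
    """Reparameterized sample on the Klein bottle: Gaussian sample in
    the covering space R^2 pushed forward through the covering map."""
    z = mu + sigma * eps             # standard reparameterization in R^2
    return klein_project(z[0], z[1])
```

For example, `klein_project(0.3, 1.4)` lands at `(0.7, 0.4)`: crossing the `y`-identification once translates `y` back by one and reflects `x`, which is exactly the orientation-reversing gluing that distinguishes the Klein bottle from the torus.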