Disentanglement of Sources in a Multi-Stream Variational Autoencoder

arXiv stat.ML / 4/2/2026


Key Points

  • The paper proposes a Multi-Stream Variational Autoencoder (MS-VAE) that disentangles sources by combining discrete and continuous latent variables rather than using a single latent space as in typical VAEs.
  • Discrete latents are incorporated through an explicit source-combination model in the decoder, where multiple sources are superimposed as part of the generative process.
  • The authors formally define the MS-VAE framework and derive inference and learning equations, then validate the approach with numerical experiments.
  • Experiments include separating superimposed MNIST digits and performing speaker diarization for two-speaker conversation audio, both showing clear source separation and competitive performance.
  • The model is described as flexible and capable of strong results with limited supervision, including an example where only 10% of labels are used for pretraining.
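The source-combination idea in the second bullet can be sketched concretely: each stream has its own continuous latent and decoder, a discrete latent switches each stream on or off, and the active decoded sources are additively superimposed to form the reconstruction. The following minimal NumPy sketch illustrates only that combination step; the dimensions, the linear per-stream decoders, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 4   # continuous latent size per stream (assumed)
N_STREAMS = 3    # number of source streams (assumed)
OBS_DIM = 16     # observation size, e.g. a flattened image patch

# One toy linear "decoder" per stream; random weights stand in for
# whatever learned decoder networks the actual model would use.
decoders = [rng.normal(0.0, 0.1, size=(OBS_DIM, LATENT_DIM))
            for _ in range(N_STREAMS)]

def superimpose(s_discrete, z_continuous):
    """Explicit source-combination decoder: each stream k decodes its
    continuous latent z_continuous[k] into a source, the discrete latent
    s_discrete[k] gates that source on (1) or off (0), and the gated
    sources are additively superimposed (clipped to [0, 1] for pixels)."""
    sources = [W @ z for W, z in zip(decoders, z_continuous)]
    mix = sum(s * src for s, src in zip(s_discrete, sources))
    return np.clip(mix, 0.0, 1.0)

# Example: streams 0 and 2 are present, stream 1 is switched off.
z = rng.normal(size=(N_STREAMS, LATENT_DIM))
x_hat = superimpose(np.array([1, 0, 1]), z)
```

Because inactive streams are gated out entirely, the continuous latent of a switched-off stream has no effect on the reconstruction, which is what lets the discrete latents carry the "which sources are present" information separately from the "what each source looks like" information.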

Abstract

Variational autoencoders (VAEs) are among the leading approaches to the problem of learning disentangled representations. Typically, a single VAE is used and disentangled representations are sought within its single continuous latent space. In this paper, we propose and provide a proof of concept for a novel Multi-Stream Variational Autoencoder (MS-VAE) that achieves disentanglement of sources by combining discrete and continuous latents. The discrete latents are used in an explicit source-combination model that superimposes a set of sources as part of the MS-VAE decoder. We formally define the MS-VAE approach, derive its inference and learning equations, and numerically investigate its principled functionality. The MS-VAE model is very flexible and can be trained with little supervision (we use fully unsupervised learning after pretraining with some labels). In our numerical experiments, we explored the ability of the MS-VAE approach to separate both superimposed hand-written digits and sound sources. For the former task we used superimposed MNIST digits (an increasingly common benchmark). For sound separation, our experiments focused on speaker diarization in a recorded conversation between two speakers. In all cases, we observe a clear separation of sources and competitive performance after training. For digit superpositions, performance is particularly competitive on complex mixtures (e.g., three and four digits). For the speaker diarization task, we observe an especially low rate of missed speakers and more precise speaker attribution. Numerical experiments confirm the flexibility of the approach across varying amounts of supervision, and we observed high performance, e.g., when using just 10% of the labels for pretraining.
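The abstract mentions derived inference and learning equations without reproducing them. In standard VAE terms, the training objective for a model with both discrete and continuous latents would take the generic ELBO form below; this is a textbook sketch, not the paper's exact derivation:

```latex
\mathcal{L}(\theta, \phi) =
  \mathbb{E}_{q_\phi(s, z \mid x)}\!\left[ \log p_\theta(x \mid s, z) \right]
  - \mathrm{KL}\!\left( q_\phi(s, z \mid x) \,\middle\|\, p(s, z) \right)
```

Here $s$ denotes the discrete (source-presence) latents, $z$ the continuous per-stream latents, and $p_\theta(x \mid s, z)$ would be centered on the superposition of the decoded sources; the paper's actual equations may differ in factorization and parameterization.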