SAHMM-VAE: A Source-Wise Adaptive Hidden Markov Prior Variational Autoencoder for Unsupervised Blind Source Separation
arXiv cs.LG / 3/30/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces SAHMM-VAE, a structured variational autoencoder for unsupervised blind source separation that uses a source-wise adaptive Hidden Markov prior on latent variables.
- Instead of one generic latent prior, SAHMM-VAE assigns each latent dimension its own regime-switching hidden Markov model, encouraging different dimensions to align with different source-specific temporal structures.
- Source separation is integrated into variational learning itself, via joint optimization of the encoder, decoder, posterior, and source-wise prior parameters, rather than performed as a separate post-processing step.
- The authors implement three prior variants within a unified framework: a Gaussian-emission HMM prior, a Markov-switching autoregressive HMM prior, and an HMM state-flow prior with state-wise autoregressive flow transformations (a minimal sketch of the first variant appears after this list).
- Experiments indicate that the method can recover sources without supervision while also learning meaningful latent switching structures; the approach is positioned as an extension of structured-prior VAE research toward interpretable, and potentially identifiable, latent modeling.
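As a rough illustration of how such a prior can be scored, here is a minimal PyTorch-style sketch of the first variant: a bank of independent Gaussian-emission HMMs, one per latent dimension, evaluated with the forward algorithm in log space. This is an assumption-laden sketch, not the paper's implementation; the class name `SourceWiseHMMPrior`, the parameter `num_states`, and the tensor layout are all hypothetical.

```python
import torch
import torch.nn as nn


class SourceWiseHMMPrior(nn.Module):
    """Illustrative sketch: one Gaussian-emission HMM per latent dimension.

    Each latent dimension gets its own initial-state distribution,
    transition matrix, and state-wise emission parameters, so different
    dimensions can lock onto different source-specific temporal regimes.
    """

    def __init__(self, latent_dim: int, num_states: int):
        super().__init__()
        # Unnormalized per-dimension initial-state and transition logits.
        self.init_logits = nn.Parameter(torch.zeros(latent_dim, num_states))
        self.trans_logits = nn.Parameter(torch.zeros(latent_dim, num_states, num_states))
        # State-wise Gaussian emission parameters, per dimension.
        self.means = nn.Parameter(torch.randn(latent_dim, num_states))
        self.log_stds = nn.Parameter(torch.zeros(latent_dim, num_states))

    def log_prob(self, z: torch.Tensor) -> torch.Tensor:
        """z: (batch, time, latent_dim) -> log p(z), shape (batch, latent_dim)."""
        B, T, D = z.shape
        # Emission log-likelihoods under each state: (B, T, D, num_states).
        emit = torch.distributions.Normal(
            self.means, self.log_stds.exp()
        ).log_prob(z.unsqueeze(-1))
        log_pi = torch.log_softmax(self.init_logits, dim=-1)   # (D, K)
        log_A = torch.log_softmax(self.trans_logits, dim=-1)   # (D, K, K)
        # Forward algorithm in log space, run independently per dimension.
        alpha = log_pi + emit[:, 0]                             # (B, D, K)
        for t in range(1, T):
            # alpha_t(j) = logsumexp_i [alpha_{t-1}(i) + log A(i, j)] + emit_t(j)
            alpha = torch.logsumexp(alpha.unsqueeze(-1) + log_A, dim=-2) + emit[:, t]
        # Marginalize the final hidden state: (B, D).
        return torch.logsumexp(alpha, dim=-1)
```

Under these assumptions, the resulting log-probability would replace the usual isotropic Gaussian prior term in a sequential VAE's ELBO, so the encoder, decoder, and prior parameters can all receive gradients from a single objective:

```python
# Hypothetical usage: score posterior samples of shape (batch, time, latent_dim).
prior = SourceWiseHMMPrior(latent_dim=8, num_states=3)
z = torch.randn(4, 50, 8)
log_pz = prior.log_prob(z).sum(dim=-1)  # one prior term per sequence, for the ELBO
```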