Identifiable Deep Latent Variable Models for MNAR Data
arXiv stat.ML / 3/27/2026
Key Points
- The paper targets biased inference in missing-not-at-random (MNAR) settings, noting that many deep-learning imputation approaches assume missing-at-random (MAR) data and can fail when that assumption is violated.
- It introduces a framework using deep latent variable models and proves identifiability of the underlying data distribution under a “conditional no self-censoring” assumption given latent variables.
- The authors develop an estimation method based on importance-weighted autoencoders to learn unknown parameters efficiently.
- The work provides theoretical and empirical evidence that the proposed approach can recover the true joint distribution, subject to specified regularity conditions.
- Extensive simulations and real-world experiments show improved performance over classical and contemporary MNAR imputation baselines.
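The importance-weighted bound underlying the estimation method can be illustrated on a toy conjugate-Gaussian latent variable model. This is a minimal sketch, not the paper's actual estimator: the one-dimensional model, the proposal, and all function names below are illustrative. The bound is log of the average importance weight, $\log \frac{1}{K}\sum_k p(x, z_k)/q(z_k \mid x)$, computed stably via log-sum-exp; when the proposal equals the exact posterior, the bound recovers $\log p(x)$ exactly.

```python
import numpy as np

def logsumexp(a):
    # numerically stable log(sum(exp(a)))
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

def log_normal(x, mu, var):
    # log density of N(mu, var) at x
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def iwae_bound(x, q_mu, q_var, K, rng):
    """Importance-weighted lower bound on log p(x) with K samples."""
    # draw K importance samples z_k ~ q(z | x)
    z = q_mu + np.sqrt(q_var) * rng.standard_normal(K)
    # log importance weights: log p(x, z_k) - log q(z_k | x)
    log_w = (log_normal(z, 0.0, 1.0)        # prior p(z) = N(0, 1)
             + log_normal(x, z, 1.0)        # likelihood p(x|z) = N(z, 1)
             - log_normal(z, q_mu, q_var))  # proposal q(z | x)
    return logsumexp(log_w) - np.log(K)

rng = np.random.default_rng(0)
x = 1.3
# exact posterior for this conjugate model: N(x/2, 1/2)
bound = iwae_bound(x, q_mu=x / 2, q_var=0.5, K=64, rng=rng)
exact = log_normal(x, 0.0, 2.0)  # marginal p(x) = N(0, 2)
print(abs(bound - exact) < 1e-8)  # bound is tight when q is the posterior
```

With an approximate proposal the bound is strictly below $\log p(x)$ and tightens as $K$ grows, which is what makes importance-weighted autoencoders attractive for learning the model parameters.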