VAE-Inf: A statistically interpretable generative paradigm for imbalanced classification
arXiv cs.LG / 4/29/2026
Key Points
- VAE-Inf is a two-stage generative-to-discriminative framework designed to improve imbalanced classification when minority samples are extremely scarce.
- It first trains a VAE only on majority-class data to learn a reference distribution, aggregates latent posteriors via a Wasserstein barycenter, and builds a geometrically principled global Gaussian baseline for the majority class.
- In the second stage, it fine-tunes the encoder using limited minority data with a new distribution-aware loss that enforces probabilistic class separation based on variance-normalized projection statistics.
- For inference, VAE-Inf uses a projection-based scoring method that supports hypothesis testing, enabling distribution-free calibration and exact finite-sample Type-I error (false positive rate) control without restrictive parametric assumptions.
- Experiments on multiple real-world benchmarks show performance competitive with existing imbalanced-classification methods; the code is available on request.
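As a concrete illustration of the aggregation step above: for diagonal-Gaussian latent posteriors (the standard VAE parameterization), the 2-Wasserstein barycenter has a closed form — it is again Gaussian, with the coordinate-wise average of the posterior means and the coordinate-wise average of the posterior standard deviations. A minimal NumPy sketch; the array names are illustrative, not from the paper:

```python
import numpy as np

def gaussian_w2_barycenter(mus, sigmas):
    """2-Wasserstein barycenter of diagonal Gaussians N(mu_i, diag(sigma_i^2)).

    Because diagonal covariances commute, the barycenter is itself Gaussian:
    its mean is the average of the means and its per-coordinate std is the
    average of the per-coordinate stds.
    mus, sigmas: arrays of shape (n_samples, latent_dim).
    """
    mu_bar = mus.mean(axis=0)        # coordinate-wise mean of posterior means
    sigma_bar = sigmas.mean(axis=0)  # coordinate-wise mean of posterior stds
    return mu_bar, sigma_bar

# Toy usage: aggregate three 2-D latent posteriors into one global Gaussian
mus = np.array([[0.0, 1.0], [2.0, 1.0], [4.0, 1.0]])
sigmas = np.array([[1.0, 0.5], [1.0, 0.5], [1.0, 0.5]])
mu_bar, sigma_bar = gaussian_w2_barycenter(mus, sigmas)
# mu_bar -> [2.0, 1.0], sigma_bar -> [1.0, 0.5]
```

This is why a single "global Gaussian baseline" for the majority class is geometrically well defined: averaging posteriors in Wasserstein space, rather than mixing them, yields one Gaussian rather than a mixture.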
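The distribution-free calibration with finite-sample Type-I error control described above can be realized with a rank-based (split-conformal-style) threshold: score a held-out batch of majority samples, then use an order statistic of those scores as the cutoff. The sketch below is a hedged illustration — the projection score is written as a generic variance-normalized z-score, and the paper's exact statistic may differ:

```python
import numpy as np

def projection_score(x, mu, sigma, w):
    """Illustrative variance-normalized projection statistic:
    |w^T (x - mu)| / sqrt(w^T diag(sigma^2) w)."""
    return np.abs(w @ (x - mu)) / np.sqrt(np.sum((w * sigma) ** 2))

def calibrated_threshold(cal_scores, alpha):
    """Rank-based cutoff: with n calibration scores from the majority class,
    the ceil((n+1)*(1-alpha))-th smallest score bounds the false-positive
    rate at alpha for exchangeable data (the standard split-conformal
    argument), with no parametric assumption on the score distribution."""
    n = len(cal_scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(np.asarray(cal_scores))[k - 1]

# Toy usage: 99 majority-class calibration scores, target alpha = 0.1
cal_scores = np.arange(1.0, 100.0)           # scores 1..99
tau = calibrated_threshold(cal_scores, 0.1)  # flag x as minority if score > tau
# tau -> 90.0
```

The threshold depends only on score ranks, which is what makes the guarantee hold exactly at finite sample sizes rather than only asymptotically.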