VAE-Inf: A statistically interpretable generative paradigm for imbalanced classification

arXiv cs.LG / 4/29/2026


Key Points

  • VAE-Inf is a two-stage generative-to-discriminative framework designed to improve imbalanced classification when minority samples are extremely scarce.
  • It first trains a VAE only on majority-class data to learn a reference distribution, aggregates latent posteriors via a Wasserstein barycenter, and builds a geometrically principled global Gaussian baseline for the majority class.
  • In the second stage, it fine-tunes the encoder using limited minority data with a new distribution-aware loss that enforces probabilistic class separation based on variance-normalized projection statistics.
  • For inference, VAE-Inf uses a projection-based scoring method that supports hypothesis testing, enabling distribution-free calibration and exact finite-sample Type-I error (false positive rate) control without restrictive parametric assumptions.
  • Experiments across multiple real-world benchmarks show competitive performance compared with other methods, and the code is available on request.
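The first-stage aggregation step above has a simple closed form in the usual VAE setting. Since VAE encoders output diagonal Gaussian posteriors N(mu_i, diag(sigma_i^2)), their covariances commute, and the 2-Wasserstein barycenter is again a diagonal Gaussian whose mean is the average of the means and whose per-dimension standard deviation is the average of the standard deviations. The sketch below illustrates this under that diagonal-posterior assumption; the function name and toy data are illustrative, not from the paper:

```python
import numpy as np

def gaussian_w2_barycenter(mus, sigmas):
    """2-Wasserstein barycenter of diagonal Gaussians N(mu_i, diag(sigma_i^2)).

    mus, sigmas: (n_samples, latent_dim) arrays of posterior means and stds.
    For commuting (diagonal) covariances the barycenter is the Gaussian with
    the averaged mean and the averaged per-dimension std.
    """
    mu_bar = mus.mean(axis=0)        # barycenter mean
    sigma_bar = sigmas.mean(axis=0)  # barycenter std (diagonal/commuting case)
    return mu_bar, sigma_bar

# Toy usage: aggregate 1000 majority-class posteriors in an 8-dim latent space
# into one global Gaussian reference model.
rng = np.random.default_rng(0)
mus = rng.normal(size=(1000, 8))
sigmas = rng.uniform(0.5, 1.5, size=(1000, 8))
mu_bar, sigma_bar = gaussian_w2_barycenter(mus, sigmas)
```

For non-diagonal covariances the barycenter covariance instead solves a fixed-point equation, but the diagonal case shown here matches how VAE posteriors are typically parameterized.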

Abstract

Imbalanced classification remains a pervasive challenge in machine learning, particularly when minority samples are too scarce to provide a robust discriminative boundary. In such extreme scenarios, conventional models often suffer from unstable decision boundaries and a lack of reliable error control. To bridge the gap between generative modeling and discriminative classification, we propose a two-stage framework, **VAE-Inf**, that integrates deep representation learning with statistically interpretable hypothesis testing. In the first stage, we adopt a one-class modeling perspective by training a variational autoencoder (VAE) exclusively on majority-class data to capture the underlying reference distribution. The resulting latent posteriors are aggregated via a Wasserstein barycenter to construct a global Gaussian reference model, providing a geometrically principled baseline for the majority class. In the second stage, we transform this generative foundation into a discriminative classifier by fine-tuning the encoder with limited minority samples. This is achieved through a novel distribution-aware loss that enforces probabilistic separation between classes based on variance-normalized projection statistics. For inference, we introduce a projection-based score that admits a natural hypothesis-testing interpretation, allowing for a distribution-free calibration procedure. This approach yields exact finite-sample control of the Type-I error (false positive rate) without relying on restrictive parametric assumptions. Extensive experiments on diverse real-world benchmarks demonstrate that our framework achieves competitive performance against existing approaches. The code is available upon request.
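The inference procedure described in the abstract can be sketched as follows. A test sample is scored by a variance-normalized projection of its latent code against the global Gaussian baseline, and the rejection threshold is set as an empirical quantile of scores computed on held-out majority data. Taking the ceil((n+1)(1-alpha))-th order statistic of n i.i.d. calibration scores gives, by exchangeability, a false-positive rate of at most alpha in finite samples, with no parametric assumptions on the score distribution. The specific score, the direction `w`, and the function names below are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def projection_score(z, mu_bar, sigma_bar, w):
    """Variance-normalized projection of latent code z against the majority
    baseline N(mu_bar, diag(sigma_bar^2)), along direction w."""
    proj = (z - mu_bar) @ w
    # std of w . z under the baseline Gaussian (diagonal covariance)
    scale = np.sqrt(np.sum(w**2 * sigma_bar**2))
    return np.abs(proj) / scale

def calibrate_threshold(cal_scores, alpha=0.05):
    """Distribution-free threshold from n held-out majority scores.

    Rejecting (flagging as minority) when score > threshold controls the
    Type-I error at level alpha exactly in finite samples, provided the
    calibration and test scores are exchangeable.
    """
    cal_scores = np.sort(np.asarray(cal_scores))
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1.0 - alpha)))  # order-statistic index
    if k > n:
        return np.inf  # too few calibration points to reject at this alpha
    return cal_scores[k - 1]

# Toy usage: calibrate on majority scores, then test a new sample.
rng = np.random.default_rng(1)
mu_bar, sigma_bar = np.zeros(8), np.ones(8)
w = np.ones(8) / np.sqrt(8.0)
cal = np.array([projection_score(z, mu_bar, sigma_bar, w)
                for z in rng.normal(size=(500, 8))])
threshold = calibrate_threshold(cal, alpha=0.05)
is_minority = projection_score(rng.normal(size=8) + 5.0,
                               mu_bar, sigma_bar, w) > threshold
```

This empirical-quantile calibration is the standard split-conformal construction, which is one natural way to obtain the exact finite-sample Type-I error control the paper claims.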