Scalable Learning of Multivariate Distributions via Coresets

arXiv cs.LG / 3/23/2026


Key Points

  • They introduce a novel coreset construction for multivariate conditional transformation models (MCTMs) to enhance scalability and training efficiency in semi-parametric regression analysis and density estimation.
  • This work presents the first coresets for semi-parametric distributional models, achieving substantial data reduction via importance sampling.
  • The method guarantees with high probability that the log-likelihood remains within a multiplicative (1±ε) error, preserving statistical model accuracy.
  • To address numerical issues with normalizing logarithmic terms, they adopt a geometric approximation based on the convex hull of the input data, enabling stable inference on large datasets.
  • Numerical experiments demonstrate significant computational efficiency gains for large and complex datasets, suggesting broad applicability in statistics and machine learning.
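The core mechanism behind the data reduction described above is sensitivity-based importance sampling: each point is sampled with probability proportional to an upper bound on its influence on the objective, and sampled points are reweighted so that weighted coreset sums are unbiased estimates of full-data sums. The sketch below is a minimal, generic illustration of that idea, not the paper's actual MCTM construction; the per-point loss and the sensitivity proxy are stand-ins chosen for the toy example.

```python
import math
import random

def build_coreset(points, sensitivities, m, seed=0):
    """Sample m points with probability p_i proportional to sensitivity s_i,
    and weight each sampled point by 1/(m * p_i), so that the weighted sum
    of any per-point loss is an unbiased estimate of the full-data sum."""
    rng = random.Random(seed)
    total = sum(sensitivities)
    probs = [s / total for s in sensitivities]
    idx = rng.choices(range(len(points)), weights=probs, k=m)
    return [(points[i], 1.0 / (m * probs[i])) for i in idx]

# Toy check: the weighted coreset sum tracks the full sum of a per-point
# loss term (a stand-in for one negative log-likelihood contribution).
data = [(i % 7) + 1.0 for i in range(10_000)]
loss = lambda x: math.log(x) ** 2
sens = [abs(math.log(x)) + 1.0 for x in data]  # crude sensitivity proxy (assumption)

coreset = build_coreset(data, sens, m=500)
full = sum(loss(x) for x in data)
approx = sum(w * loss(x) for x, w in coreset)
rel_err = abs(approx - full) / full  # small with high probability
```

The better the sensitivity bound captures each point's worst-case contribution, the smaller the coreset needed to keep the multiplicative error within (1±ε); the paper's contribution lies precisely in deriving such bounds for MCTM log-likelihoods.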

Abstract

Efficient and scalable non-parametric or semi-parametric regression analysis and density estimation are of crucial importance to the fields of statistics and machine learning. However, available methods are limited in their ability to handle large-scale data. We address this issue by developing a novel coreset construction for multivariate conditional transformation models (MCTMs) to enhance their scalability and training efficiency. To the best of our knowledge, these are the first coresets for semi-parametric distributional models. Our approach yields substantial data reduction via importance sampling. It ensures with high probability that the log-likelihood remains within multiplicative error bounds of (1 ± ε) and thereby maintains statistical model accuracy. Compared to conventional fully parametric models, where coresets have been incorporated before, our semi-parametric approach exhibits enhanced adaptability, particularly in scenarios where complex distributions and non-linear relationships are present but not fully understood. To address numerical problems associated with normalizing logarithmic terms, we employ a geometric approximation based on the convex hull of the input data. This ensures feasible, stable, and accurate inference in scenarios involving large amounts of data. Numerical experiments demonstrate substantially improved computational efficiency when handling large and complex datasets, thus laying the foundation for a broad range of applications within the statistics and machine learning communities.
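The abstract's stabilization trick rests on the convex hull of the input data. The paper does not spell out its exact construction here, so as a stand-in, the sketch below computes a 2D convex hull with Andrew's monotone chain, a standard O(n log n) algorithm; any geometric approximation of the data's support would start from such a hull.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates

# Interior points are discarded; only the four corners of the square remain.
pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5), (0.2, 0.7)]
hull = convex_hull(pts)
```

Restricting evaluation of the problematic logarithmic normalization terms to this hull (rather than to all n points) is one plausible reading of how the geometric approximation keeps inference numerically stable at scale.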