Model Selection and Parameter Estimation of Multi-dimensional Gaussian Mixture Model

arXiv stat.ML / 3/23/2026


Key Points

  • The authors derive an information-theoretic lower bound on the sample complexity for reliably selecting the model order in multi-dimensional Gaussian Mixture Models, showing that distinguishing a k-component mixture from a simpler model requires a sample size scaling as Ω(Δ^{-(4k-4)}).
  • They propose a thresholding-based, parameter-free estimator that uses the spectral gap of an empirical covariance computed from random Fourier measurements, with time complexity O(k^2 n).
  • Conditioned on the estimated model order, they introduce a gradient-based parameter estimation method with a data-driven, score-based initialization that achieves the optimal parametric convergence rate of O_p(n^{-1/2}) for estimating the component means.
  • In high-dimensional regimes where the ambient dimension exceeds the number of components (d > k), they integrate PCA for dimension reduction and demonstrate that their Fourier-based framework outperforms traditional EM methods in both estimation accuracy and computational time.
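To make the model-order step concrete, here is a small sketch of spectral-gap thresholding on a covariance of random Fourier measurements. The paper's exact construction is not reproduced here: the `empirical_fourier_covariance` function, the Gaussian frequency distribution, the uncentered second-moment matrix, and the `0.5 * λ_max` threshold are all illustrative assumptions, not the authors' estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_fourier_covariance(X, num_freqs=50, scale=1.0):
    """Second-moment matrix of random Fourier measurements e^{i <w, x>}.

    Illustrative guess at an 'empirical covariance from random Fourier
    measurement vectors'; the authors' exact matrix may differ.
    """
    W = rng.normal(scale=scale, size=(num_freqs, X.shape[1]))  # random frequencies
    Z = np.exp(1j * X @ W.T)                                   # n x num_freqs
    return (Z.conj().T @ Z) / X.shape[0]                       # Hermitian PSD

def estimate_order(C, threshold):
    """Thresholding rule: count eigenvalues above the threshold."""
    eigvals = np.linalg.eigvalsh(C)          # ascending, real for Hermitian C
    return int(np.sum(eigvals > threshold))

# Toy data: 3 well-separated Gaussian components in 2-D.
means = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
X = np.vstack([rng.normal(m, 0.5, size=(300, 2)) for m in means])

C = empirical_fourier_covariance(X)
# Relative threshold (an assumption); the paper's rule is parameter-free.
k_hat = estimate_order(C, threshold=0.5 * np.linalg.eigvalsh(C).max())
```

With well-separated components, the measurement vectors of the k clusters are nearly orthogonal, so the matrix has roughly k dominant eigenvalues and a visible spectral gap below them, which is what the count above the threshold detects.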

Abstract

In this paper, we study the problem of learning multi-dimensional Gaussian Mixture Models (GMMs), with a specific focus on model order selection and efficient mixing distribution estimation. We first establish an information-theoretic lower bound on the critical sample complexity required for reliable model selection. More specifically, we show that distinguishing a k-component mixture from a simpler model necessitates a sample size scaling of \Omega(\Delta^{-(4k-4)}). We then propose a thresholding-based estimation algorithm that evaluates the spectral gap of an empirical covariance matrix constructed from random Fourier measurement vectors. This parameter-free estimator operates with an efficient time complexity of \mathcal{O}(k^2 n), scaling linearly with the sample size. We demonstrate that the sample complexity of our method matches the established lower bound, confirming its minimax optimality with respect to the component separation distance \Delta. Conditioned on the estimated model order, we subsequently introduce a gradient-based minimization method for parameter estimation. To effectively navigate the non-convex objective landscape, we employ a data-driven, score-based initialization strategy that guarantees rapid convergence. We prove that this method achieves the optimal parametric convergence rate of \mathcal{O}_p(n^{-1/2}) for estimating the component means. To enhance the algorithm's efficiency in high-dimensional regimes where the ambient dimension exceeds the number of mixture components (i.e., \(d > k\)), we integrate principal component analysis (PCA) for dimension reduction. Numerical experiments demonstrate that our Fourier-based algorithmic framework outperforms conventional Expectation-Maximization (EM) methods in both estimation accuracy and computational time.
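The estimation stage described in the abstract (PCA reduction for d > k, then gradient-based minimization from a data-driven initialization) can be sketched as follows. This is a simplified stand-in, not the paper's method: it assumes equal weights and a known shared isotropic variance, and a farthest-point initialization replaces the authors' score-based initialization, which is not detailed here.

```python
import numpy as np

rng = np.random.default_rng(1)

def gmm_mean_gradient_step(X, mu, sigma2, lr):
    """One gradient-ascent step on the GMM log-likelihood w.r.t. the means
    (equal weights, shared isotropic variance sigma2 assumed)."""
    diff = X[:, None, :] - mu[None, :, :]                 # n x k x d
    logits = -0.5 * (diff ** 2).sum(-1) / sigma2          # n x k
    logits -= logits.max(axis=1, keepdims=True)
    r = np.exp(logits)
    r /= r.sum(axis=1, keepdims=True)                     # responsibilities
    grad = (r[:, :, None] * diff).sum(0) / (len(X) * sigma2)
    return mu + lr * grad

# Toy problem: k = 2 components in d = 5; reduce to k dims via PCA first,
# mirroring the d > k dimension-reduction step.
true_mu = np.array([[3.0, 0.0, 0.0, 0.0, 0.0], [-3.0, 0.0, 0.0, 0.0, 0.0]])
X = np.vstack([rng.normal(m, 1.0, size=(500, 5)) for m in true_mu])

Xc = X - X.mean(0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:2]                                # top-2 principal directions
Y = Xc @ P.T                              # reduced data, n x 2

# Farthest-point initialization (hypothetical stand-in for the
# paper's score-based initialization).
i0 = np.argmax((Y ** 2).sum(1))
i1 = np.argmax(((Y - Y[i0]) ** 2).sum(1))
mu = Y[[i0, i1]].copy()

for _ in range(200):
    mu = gmm_mean_gradient_step(Y, mu, sigma2=1.0, lr=1.0)

mu_full = mu @ P + X.mean(0)              # lift estimates back to ambient space
```

Under good separation the initialization lands one center in each cluster, after which the responsibility-weighted gradient steps contract geometrically toward the component means; this is the regime in which the paper proves the O_p(n^{-1/2}) rate for its (different, score-initialized) procedure.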