On two ways to use determinantal point processes for Monte Carlo integration

arXiv cs.LG / 4/22/2026


Key Points

  • The paper studies how using determinantal point processes (DPPs), which generate repulsive samples, can improve Monte Carlo integration compared with standard independent-sample estimators.
  • It shows that replacing i.i.d. samples with a DPP preserves consistency while yielding variance rates that depend on how the DPP is matched to both the integrand f and the target distribution ω.
  • The authors compare two prior DPP-based approaches: a fixed-DPP estimator (Bardenet & Hardy, 2020) that attains a faster \mathcal{O}(N^{-(1+1/d)}) rate for smooth integrands, and an unbiased estimator (Ermakov & Zolotukhin, 1960) with the standard 1/N rate, whose DPP must be tailored to f.
  • The work revisits these estimators, extends them to continuous settings, and provides practical sampling algorithms for implementing the methods.
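To make the fixed-DPP idea concrete, here is a minimal sketch on a discrete toy problem, not the paper's continuous construction: we approximate \int f\,d\omega for uniform ω on [0, 1] by the grid mean of f, sample a projection DPP exactly with the standard chain-rule sampler, and use the unbiased reweighting by the kernel diagonal. The grid size M, rank k, random orthonormal basis V, and the integrand f = x² are all illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

M, k = 100, 20                       # grid size, DPP sample size (kernel rank); toy choices
x = (np.arange(M) + 0.5) / M         # uniform grid on [0, 1]
f = x ** 2                           # illustrative integrand; its grid mean is close to 1/3

# Projection kernel K = V V^T built from an arbitrary orthonormal M x k matrix.
V, _ = np.linalg.qr(rng.standard_normal((M, k)))
K_diag = np.sum(V ** 2, axis=1)      # inclusion probabilities P(i in X) = K_ii

def sample_projection_dpp(V, rng):
    """Exact chain-rule sampler for a rank-k projection DPP on a finite set."""
    W = V.copy()
    items = []
    for _ in range(V.shape[1]):
        p = np.clip(np.sum(W ** 2, axis=1), 0.0, None)  # current conditionals
        i = rng.choice(len(p), p=p / p.sum())
        items.append(i)
        w = W[i] / np.linalg.norm(W[i])                 # project out chosen direction
        W = W - np.outer(W @ w, w)
    return np.array(items)

def dpp_estimate(rng):
    # E[ sum_{i in X} f_i / K_ii ] = sum_i f_i, so this is unbiased for mean(f).
    S = sample_projection_dpp(V, rng)
    return np.sum(f[S] / K_diag[S]) / M
```

Averaging `dpp_estimate` over independent DPP draws concentrates near the grid mean of f; the repulsion between the k points is what drives the variance gains the paper analyzes.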

Abstract

The standard Monte Carlo estimator \widehat{I}_N^{\mathrm{MC}} of \int f\,d\omega relies on independent samples from \omega and has variance of order 1/N. Replacing the samples with a determinantal point process (DPP), a repulsive distribution, keeps the estimator consistent, with variance rates that depend on how the DPP is adapted to f and \omega. We examine two existing DPP-based estimators. The first, by Bardenet & Hardy (2020), achieves a rate of \mathcal{O}(N^{-(1+1/d)}) for smooth f but relies on a fixed DPP. The second, by Ermakov & Zolotukhin (1960), is unbiased with a rate of order 1/N, like standard Monte Carlo, but requires a DPP tailored to f. We revisit these estimators, generalize them to continuous settings, and provide sampling algorithms.
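The Ermakov–Zolotukhin estimator mentioned above can also be sketched on a discrete toy problem (uniform ω on an M-point grid); this is an illustration of the linear-system idea, not the paper's continuous algorithm. We build an orthonormal basis whose first column is the constant function, sample a projection DPP spanned by that basis, and solve a k×k interpolation system; the first coefficient is then an unbiased estimate of the mean of f. The sizes M and k, the random basis, and the integrand sin(2πx) are illustrative assumptions.

```python
import numpy as np

M, k = 100, 20                       # grid size and DPP sample size; toy choices
x = (np.arange(M) + 0.5) / M
f = np.sin(2 * np.pi * x)            # illustrative integrand; its grid mean is 0

# Orthonormal basis whose first column is the constant function 1/sqrt(M).
A = np.random.default_rng(0).standard_normal((M, k))
A[:, 0] = 1.0
V, _ = np.linalg.qr(A)
if V[0, 0] < 0:                      # fix QR's sign so column 0 is +1/sqrt(M)
    V[:, 0] = -V[:, 0]

def sample_projection_dpp(V, rng):
    """Exact chain-rule sampler for a rank-k projection DPP on a finite set."""
    W = V.copy()
    items = []
    for _ in range(V.shape[1]):
        p = np.clip(np.sum(W ** 2, axis=1), 0.0, None)
        i = rng.choice(len(p), p=p / p.sum())
        items.append(i)
        w = W[i] / np.linalg.norm(W[i])
        W = W - np.outer(W @ w, w)
    return np.array(items)

def ez_estimate(rng):
    S = sample_projection_dpp(V, rng)
    a = np.linalg.solve(V[S], f[S])  # k x k system: sum_j a_j phi_j(x_i) = f(x_i)
    return a[0] / np.sqrt(M)         # a_0 is unbiased for <f, phi_0> = sqrt(M) * mean(f)
```

This makes the trade-off in the abstract visible: the estimator is unbiased for any f, but the basis spanning the DPP must be chosen with f in mind, since the variance is governed by how well the span approximates f.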