Efficient Parameter Estimation of Truncated Boolean Product Distributions

arXiv stat.ML / 5/5/2026


Key Points

  • The paper studies how to estimate parameters of Boolean product distributions when observed samples are truncated to a subset S of {0,1}^d that can be checked via a membership oracle.
  • It introduces a “fatness” notion for the truncation set, showing that if S is sufficiently fat then information in truncated samples suffices to reconstruct samples from the true distribution.
  • As a result, many tasks that are efficiently solvable for Boolean product distributions—such as learning in total variation distance, parameter estimation, and testing uniformity/identity—remain efficiently solvable from truncated data with only a modest increase in sample complexity.
  • The authors extend the framework to ranking models (Mallows models), demonstrating that fatness enables efficient parameter estimation from truncated samples there as well.
  • By analyzing what makes learning possible, the paper identifies three natural conditions that are necessary for efficient identifiability (richness of S, membership-oracle access, and preserved randomness in all directions) and proves they are also sufficient, via an adaptation of the Stochastic Gradient Descent approach of Daskalakis et al. (FOCS 2018).
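To make the truncation model concrete, here is a minimal Python sketch (not from the paper) of sampling a Boolean product distribution truncated to a set S that is accessible only through a membership oracle. The marginals `p`, the predicate `in_S`, and the retry bound are all illustrative assumptions; the rejection loop simply reflects the definition of truncation, namely the product distribution conditioned on landing in S.

```python
import random

def sample_product(p):
    """Draw one sample from a Boolean product distribution with
    marginals p = (p_1, ..., p_d): independent Bernoulli(p_i) bits."""
    return tuple(1 if random.random() < p_i else 0 for p_i in p)

def sample_truncated(p, in_S, max_tries=10_000):
    """Draw one sample from the product distribution conditioned on S,
    using only membership-oracle access to S (rejection sampling).
    The expected number of oracle calls is 1 / Pr[X in S]."""
    for _ in range(max_tries):
        x = sample_product(p)
        if in_S(x):
            return x
    raise RuntimeError("truncation set has very small mass under p")

# Hypothetical truncation set: all vectors of Hamming weight >= 2.
in_S = lambda x: sum(x) >= 2
x = sample_truncated([0.5, 0.5, 0.5], in_S)
assert in_S(x)
```

Note that rejection sampling alone is only efficient when S has non-negligible mass under the sampling distribution; the paper's fatness condition is what guarantees, conversely, that truncated samples carry enough information to recover the untruncated distribution.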

Abstract

We study the problem of estimating the parameters of a Boolean product distribution in d dimensions, when the samples are truncated by a set S ⊂ {0,1}^d accessible through a membership oracle. This is the first time that the computational and statistical complexity of learning from truncated samples is considered in a discrete setting. We introduce a natural notion of fatness of the truncation set S, under which truncated samples reveal enough information about the true distribution. We show that if the truncation set is sufficiently fat, samples from the true distribution can be generated from truncated samples. A stunning consequence is that virtually any statistical task (e.g., learning in total variation distance, parameter estimation, uniformity or identity testing) that can be performed efficiently for Boolean product distributions, can also be performed from truncated samples, with a small increase in sample complexity. We generalize our approach to ranking distributions over d alternatives, where we show how fatness implies efficient parameter estimation of Mallows models from truncated samples. Exploring the limits of learning discrete models from truncated samples, we identify three natural conditions that are necessary for efficient identifiability: (i) the truncation set S should be rich enough; (ii) S should be accessible through membership queries; and (iii) the truncation by S should leave enough randomness in all directions. By carefully adapting the Stochastic Gradient Descent approach of (Daskalakis et al., FOCS 2018), we show that these conditions are also sufficient for efficient learning of truncated Boolean product distributions.
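The SGD route mentioned at the end of the abstract can be sketched as follows. This is not the paper's exact algorithm, only a minimal illustration under standard assumptions: the product distribution is written in its natural parameterization θ_i = log(p_i / (1 − p_i)), the stochastic gradient of the truncated negative log-likelihood at a truncated sample x is E_θ[Z | Z ∈ S] − x, that expectation is estimated with a single rejection sample from the current model restricted to S, and the crude clipping stands in for the projection step of the analysis. The step size, epoch count, and bounds are arbitrary choices for the sketch.

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def sample_model(theta, in_S, max_tries=10_000):
    """Rejection-sample from the current product model p_theta
    conditioned on S, using the membership oracle in_S."""
    for _ in range(max_tries):
        z = tuple(1 if random.random() < sigmoid(t) else 0 for t in theta)
        if in_S(z):
            return z
    raise RuntimeError("current model puts too little mass on S")

def sgd_truncated(samples, in_S, d, eta=0.05, epochs=20):
    """Projected-SGD sketch for the natural parameters of a truncated
    Boolean product distribution. For a truncated sample x, the
    stochastic gradient of the negative log-likelihood is
    E_theta[Z | Z in S] - x, estimated by one rejection sample z."""
    theta = [0.0] * d
    for _ in range(epochs):
        random.shuffle(samples)
        for x in samples:
            z = sample_model(theta, in_S)
            theta = [t - eta * (zi - xi) for t, zi, xi in zip(theta, z, x)]
            # Crude projection: keep parameters in a bounded box,
            # mimicking the projection step of the SGD analysis.
            theta = [max(-4.0, min(4.0, t)) for t in theta]
    return [sigmoid(t) for t in theta]  # recovered marginals p_i
```

The key point the sketch mirrors is that the gradient estimator only needs truncated samples plus oracle access to S: the log-partition term of the truncated likelihood differentiates to a conditional expectation over S, which rejection sampling from the current model can estimate.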
