AI Navigate

High-Dimensional Gaussian Mean Estimation under Realizable Contamination

arXiv cs.LG / 3/18/2026


Key Points

  • The paper studies high-dimensional Gaussian mean estimation under a realizable ε-contamination model, in which each sample x goes missing with probability r(x) for an adversarially chosen function r(x) taking values between 0 and ε.
  • It establishes an information-computation gap in the Statistical Query model, showing that any algorithm must either use substantially more samples than the information-theoretic minimum or incur exponential runtime.
  • The authors provide an algorithm whose sample-time tradeoff nearly matches the derived lower bound, complementing the theoretical gap with a near-optimal practical approach.
  • Overall, the work qualitatively characterizes the computational complexity of Gaussian mean estimation under ε-realizable contamination and clarifies the limits of efficient computation in this setting.
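The missingness mechanism in the first bullet can be illustrated with a small simulation. This is a sketch, not the paper's estimator: the particular choice of r(x) is ours, and the single-coordinate setup is a simplification of the d-dimensional Gaussian setting.

```python
import random
import statistics

random.seed(0)
n, eps = 200_000, 0.1

# One coordinate suffices to illustrate the effect; the paper works
# with N(mu, I_d) in R^d.  The true mean here is 0.
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# One illustrative adversarial choice of r(x) in [0, eps]: delete
# positive samples with the maximum allowed probability eps, and
# never delete negative ones.
def r(x):
    return eps if x > 0.0 else 0.0

# Each sample goes missing independently with probability r(x).
observed = [x for x in samples if random.random() >= r(x)]

naive_mean = statistics.fmean(observed)
# The naive mean of the surviving samples is biased below the true
# mean 0 (roughly -0.04 for eps = 0.1), even though every sample
# was drawn from the true Gaussian.
```

Because the adversary can tilt which samples survive, the empirical mean of the observed data is no longer a consistent estimator, which is why nontrivial algorithms (and lower bounds) are needed in this model.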

Abstract

We study mean estimation for a Gaussian distribution with identity covariance in \mathbb{R}^d under a missing data scheme termed the realizable \epsilon-contamination model. In this model, an adversary can choose a function r(x) taking values between 0 and \epsilon, and each sample x goes missing with probability r(x). Recent work (Ma et al., 2024) proposed this model as an intermediate-strength setting between Missing Completely At Random (MCAR) -- where missingness is independent of the data -- and Missing Not At Random (MNAR) -- where missingness may depend arbitrarily on the sample values and can lead to non-identifiability issues. That work established information-theoretic upper and lower bounds for mean estimation in the realizable contamination model. Their proposed estimators incur runtime exponential in the dimension, leaving open the possibility of computationally efficient algorithms in high dimensions. In this work, we establish an information-computation gap in the Statistical Query model (and, as a corollary, for Low-Degree Polynomials and PTF tests), showing that algorithms must either use substantially more samples than information-theoretically necessary or incur exponential runtime. We complement our SQ lower bound with an algorithm whose sample-time tradeoff nearly matches our lower bound. Together, these results qualitatively characterize the complexity of Gaussian mean estimation under \epsilon-realizable contamination.
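The abstract places the realizable model between MCAR and MNAR. The MCAR end of that spectrum is easy to check in the same simplified one-dimensional setup as above (again a sketch under our own assumptions, not the paper's construction): when deletion is independent of the sample values, the survivors are still an i.i.d. draw from the true distribution, so the naive mean remains unbiased.

```python
import random
import statistics

random.seed(1)
n, eps = 200_000, 0.1

# Standard Gaussian samples with true mean 0 (1-D simplification).
samples = [random.gauss(0.0, 1.0) for _ in range(n)]

# MCAR: every sample is deleted with the same probability eps,
# independently of its value.  The surviving samples are still an
# i.i.d. draw from the true distribution, so the naive mean is an
# unbiased estimate of 0.  Contrast with the realizable model,
# where r(x) may depend on x and can bias the naive mean.
observed = [x for x in samples if random.random() >= eps]
mcar_mean = statistics.fmean(observed)
```

This is why MCAR is the benign end of the spectrum: only when the missingness probability is allowed to depend on x does the estimation problem become statistically and computationally interesting.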