Near-Optimal Cryptographic Hardness of Learning With Homogeneous Halfspaces Under Gaussian Marginals

arXiv cs.LG / 4/30/2026


Key Points

  • The paper studies learning and auditing tasks where the goal is to identify homogeneous halfspaces from labeled examples under Gaussian feature marginals.
  • It considers three settings—agnostic learning, one-sided reliable learning, and fairness auditing—and targets near-best performance according to the corresponding loss measures.
  • The authors establish near-optimal computational hardness results via a reduction from the Learning With Errors (LWE) problem, a widely believed cryptographic hardness assumption.
  • Compared with prior work that focused largely on general (not necessarily homogeneous) halfspaces, the results extend hardness to the homogeneous case and substantially tighten the gap for agnostically learning homogeneous halfspaces under Gaussian marginals.
  • Overall, the contribution is a stronger theoretical boundary on what is computationally feasible for these Gaussian halfspace learning and fairness-related problems under LWE.

Abstract

We study three problems that involve identifying homogeneous halfspaces under Gaussian distributions: agnostic learning, one-sided reliable learning, and fairness auditing. In each of these problems, we are given labeled examples $(\mathbf{x}, y)$ drawn from an unknown distribution on $\mathbb{R}^d \times \{-1, +1\}$ whose marginal distribution on $\mathbf{x}$ is standard Gaussian and whose distribution on $y$ is arbitrary. The goal in each problem is to output a homogeneous halfspace that approaches the best-fitting homogeneous halfspace with respect to the corresponding loss measure. We prove near-optimal computational hardness results for these problems under the widely believed hardness assumption of the Learning With Errors (LWE) problem. Prior hardness results for these problems were mostly established for general halfspaces; our findings extend several of these hardness results to homogeneous halfspaces. Notably, our lower bound strictly generalizes prior work and narrows the gap between the upper and lower bounds for agnostically learning homogeneous halfspaces under Gaussian marginals.
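To make the agnostic setting concrete, here is a minimal illustrative sketch (not from the paper): a homogeneous halfspace is $h_{\mathbf{w}}(\mathbf{x}) = \mathrm{sign}(\langle \mathbf{w}, \mathbf{x} \rangle)$ with no bias term, and its loss is the probability of disagreeing with the label. The target direction `w_star`, the dimension, and the 10% label-noise rate below are all hypothetical choices for illustration.

```python
import numpy as np

# Illustrative sketch of the agnostic setting (hypothetical parameters):
# features x are standard Gaussian on R^d, labels y in {-1, +1} are arbitrary.
rng = np.random.default_rng(0)
d, n = 5, 100_000

# Standard Gaussian marginal on the features.
X = rng.standard_normal((n, d))

# Labels from a hypothetical target homogeneous halfspace w_star,
# with 10% of labels flipped (so the best achievable loss is ~0.10).
w_star = np.zeros(d)
w_star[0] = 1.0
y = np.sign(X @ w_star)
flip = rng.random(n) < 0.10
y[flip] *= -1

def zero_one_loss(w, X, y):
    """Empirical 0-1 loss of the homogeneous halfspace sign(<w, x>)."""
    return np.mean(np.sign(X @ w) != y)

# The best-fitting homogeneous halfspace here is w_star itself (loss ~0.10);
# an orthogonal direction is uninformative (loss ~0.50).
print(zero_one_loss(w_star, X, y))
```

The hardness results concern how much computation is needed to find a halfspace whose loss approaches that of the best-fitting one; evaluating the loss itself, as above, is easy.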