The Sample Complexity of Multicalibration

arXiv cs.LG / 4/24/2026


Key Points

  • The paper analyzes the minimax sample complexity of multicalibration in the batch setting, where a learner observes i.i.d. samples and must produce a predictor whose population multicalibration error, measured by Expected Calibration Error (ECE), is at most ε over a specified family of groups G.
  • For any fixed κ > 0, in the regime |G| ≤ ε^{-κ}, it proves that Θ̃(ε^{-3}) samples are necessary and sufficient (up to polylog factors); the lower bound holds even for randomized predictors, and the upper bound is achieved by a randomized predictor obtained via an online-to-batch reduction.
  • This gives a clear separation from marginal calibration, which needs only Θ̃(ε^{-2}) samples, and shows that mean-ECE multicalibration is as hard in the batch setting as in the online setting, unlike marginal calibration, which is strictly harder online.
  • When κ = 0, the sample complexity drops back to Θ̃(ε^{-2}), a sharp threshold phenomenon; more broadly, the authors derive matching upper and lower bounds for weighted L_p multicalibration metrics for 1 ≤ p ≤ 2 with optimal exponent 3/p, and extend the lower bounds to elicitable properties such as expectiles and bounded-density quantiles.
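To make the metrics in the key points concrete, here is a minimal sketch of how the empirical ECE and a group-wise multicalibration error could be computed from samples. This is an illustrative convention only, not the paper's estimator: it assumes predictions in [0, 1] discretized into equal-width bins, and weights each group's conditional ECE by the group's probability mass (normalizations vary across the literature); the function names are hypothetical.

```python
import numpy as np

def ece(preds, labels, n_bins=10):
    """Empirical Expected Calibration Error with equal-width bins:
    the bin-mass-weighted average gap between mean prediction and
    mean outcome within each bin."""
    bins = np.floor(preds * n_bins).clip(0, n_bins - 1).astype(int)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # weight each bin by its empirical probability mass
            err += mask.mean() * abs(preds[mask].mean() - labels[mask].mean())
    return err

def multicalibration_error(preds, labels, groups):
    """Worst-case group-wise calibration error over a family of groups,
    given as boolean masks; each group's conditional ECE is weighted by
    the group's mass (one common convention among several)."""
    return max(ece(preds[g], labels[g]) * g.mean() for g in groups)
```

On a perfectly calibrated predictor both quantities are zero; the sample-complexity question the paper studies is how many i.i.d. draws are needed before these empirical estimates (and the learned predictor's population error) are reliably within ε.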

Abstract

We study the minimax sample complexity of multicalibration in the batch setting. A learner observes n i.i.d. samples from an unknown distribution and must output a (possibly randomized) predictor whose population multicalibration error, measured by Expected Calibration Error (ECE), is at most \varepsilon with respect to a given family of groups G. For every fixed \kappa > 0, in the regime |G| \le \varepsilon^{-\kappa}, we prove that \widetilde{\Theta}(\varepsilon^{-3}) samples are necessary and sufficient, up to polylogarithmic factors. The lower bound holds even for randomized predictors, and the upper bound is realized by a randomized predictor obtained via an online-to-batch reduction. This separates the sample complexity of multicalibration from that of marginal calibration, which scales as \widetilde{\Theta}(\varepsilon^{-2}), and shows that mean-ECE multicalibration is as difficult in the batch setting as it is in the online setting, in contrast to marginal calibration, which is strictly more difficult in the online setting. In contrast, we observe that for \kappa = 0, the sample complexity of multicalibration remains \widetilde{\Theta}(\varepsilon^{-2}), exhibiting a sharp threshold phenomenon. More generally, we establish matching upper and lower bounds, up to polylogarithmic factors, for a weighted L_p multicalibration metric for all 1 \le p \le 2, with optimal exponent 3/p. We also extend the lower-bound template to a regular class of elicitable properties, and combine it with the online upper bounds of Hu et al. (2025) to obtain matching bounds for calibrating properties including expectiles and bounded-density quantiles.
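For readers unfamiliar with the constraint the abstract refers to, the following is a sketch of one common formalization of group-wise ECE and the multicalibration requirement; the exact conditioning and normalization conventions vary across papers, so this should be read as illustrative rather than as the paper's precise definition.

```latex
% Group-wise ECE of a predictor f over a group g (one common convention):
% sum over the values v taken by f, weighted by their joint mass with g.
\[
  \mathrm{ECE}_g(f) \;=\; \sum_{v} \Pr\big[f(x)=v,\; x\in g\big]\,
      \Big|\, \mathbb{E}\big[\,y - v \,\big|\, f(x)=v,\; x\in g\,\big] \Big|
\]
% Multicalibration with respect to the family G at level \varepsilon:
\[
  \max_{g\in G}\; \mathrm{ECE}_g(f) \;\le\; \varepsilon
\]
```

Marginal calibration is the special case where G contains only the whole population, which is why its \widetilde{\Theta}(\varepsilon^{-2}) rate serves as the baseline the paper's \widetilde{\Theta}(\varepsilon^{-3}) bound is separated from.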