An Efficient Black-Box Reduction from Online Learning to Multicalibration, and a New Route to $\Phi$-Regret Minimization

arXiv cs.LG / April 22, 2026

Key Points

  • The paper gives a Gordon-Greenwald-Marks (GGM)-style black-box reduction: combining any no-regret learner over a function class H with an expected variational inequality (EVI) solver yields an algorithm for online multicalibration (see the notation sketch after this list).
  • It also proves a converse: efficient online multicalibration implies efficient EVI solving, so EVIs play the same role for multicalibration that fixed points play in the original GGM framework for Φ-regret.
  • Together, these results resolve the main open question from the SODA ’24 work of Garg, Jung, Reingold, and Roth, showing that oracle-efficient online multicalibration with √T-type guarantees is achievable in full generality.
  • The authors further show a fine-grained reduction from high-dimensional online multicalibration to contextual Φ-regret minimization, offering a simpler and faster route from external regret to Φ-regret than prior approaches, and enabling new algorithms robust to richer deviation classes (including RKHS-based classes).
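
To fix notation for the objects above (these are standard formulations and may differ in details from the paper's): an online multicalibration guarantee with respect to $H$ asks that the predictions $p_t$ satisfy

$$\max_{h \in H} \left| \sum_{t=1}^{T} h(x_t, p_t)\,(y_t - p_t) \right| \le O\big(\sqrt{T}\big),$$

where $x_t$ are contexts and $y_t$ the realized outcomes, while $\Phi$-regret against a class $\Phi$ of deviation maps asks that the actions $a_t$ satisfy

$$\max_{\varphi \in \Phi} \sum_{t=1}^{T} \Big( \ell_t(a_t) - \ell_t\big(\varphi(a_t)\big) \Big) \le O\big(\sqrt{T}\big)$$

for the losses $\ell_t$; external regret is the special case where $\Phi$ contains only the constant maps.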

Abstract

We give a Gordon-Greenwald-Marks (GGM)-style black-box reduction from online learning to online multicalibration. Concretely, we show that to achieve high-dimensional multicalibration with respect to a class of functions $H$, it suffices to combine any no-regret learner over $H$ with an expected variational inequality (EVI) solver. We also prove a converse statement showing that efficient multicalibration implies efficient EVI solving, highlighting how EVIs in multicalibration mirror the role of fixed points in the GGM result for $\Phi$-regret. This first set of results resolves the main open question in Garg, Jung, Reingold, and Roth (SODA '24), showing that oracle-efficient online multicalibration with $\sqrt{T}$-type guarantees is possible in full generality. Furthermore, our GGM-style reduction unifies the analyses of existing online multicalibration algorithms, enables new algorithms for challenging environments with delayed observations or censored outcomes, and yields the first efficient black-box reduction between online learning and multiclass omniprediction. Our second main result is a fine-grained reduction from high-dimensional online multicalibration to (contextual) $\Phi$-regret minimization. Together with our first result, this establishes a new route from external regret to $\Phi$-regret that bypasses sophisticated fixed-point or semi-separation machinery, dramatically simplifies a result of Daskalakis, Farina, Fishelson, Pipis, and Schneider (STOC '25) while improving rates, and yields new algorithms that are robust to richer deviation classes, such as those belonging to any reproducing kernel Hilbert space.
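
To see the shape of the forward reduction, here is a minimal, illustrative Python sketch for the one-dimensional, binary-outcome case. It uses Hedge as the no-regret learner over a small finite test class and a grid search as a stand-in for the EVI solver; the names (`evi_step`, `multicalibrate`) and these specific choices are assumptions made for illustration, not the paper's construction, which is black-box in the learner and covers high-dimensional predictions.

```python
import numpy as np

def evi_step(g, grid):
    """Stand-in for the EVI solver in the 1-D binary-outcome case: pick a
    prediction p so that the round's increment g(p) * (y - p) is small or
    favorably signed no matter which outcome y in {0, 1} arrives."""
    vals = np.array([g(p) for p in grid])
    if vals.min() >= 0.0:
        return grid[-1]   # g >= 0 everywhere: predict 1, making y - p <= 0
    if vals.max() <= 0.0:
        return grid[0]    # g <= 0 everywhere: predict 0, making y - p >= 0
    return grid[int(np.argmin(np.abs(vals)))]  # a grid point near a root of g

def multicalibrate(contexts, outcomes, tests, eta=0.1):
    """Hedge over the test class (include both h and -h in `tests` so the
    absolute correlation is controlled), with an EVI-style step picking
    each round's prediction."""
    w = np.ones(len(tests))
    grid = np.linspace(0.0, 1.0, 201)
    preds = []
    for x, y in zip(contexts, outcomes):
        probs = w / w.sum()
        g = lambda p: sum(q * h(x, p) for q, h in zip(probs, tests))
        p = evi_step(g, grid)          # prediction the adversary can't exploit
        preds.append(p)
        # each test's "gain" is its correlation with the residual y - p
        gains = np.array([h(x, p) * (y - p) for h in tests])
        w = w * np.exp(eta * gains)    # multiplicative-weights (Hedge) update
    return preds

# Demo: with the constant tests +/-1 this is ordinary aggregate calibration.
rng = np.random.default_rng(0)
ys = (rng.uniform(size=2000) < 0.3).astype(float)   # Bernoulli(0.3) outcomes
tests = [lambda x, p: 1.0, lambda x, p: -1.0]
preds = multicalibrate(np.zeros(2000), ys, tests)
print(abs(sum(ys) - sum(preds)))   # cumulative residual: sublinear in T
```

With the constant tests ±1 the procedure reduces to ordinary aggregate calibration: Hedge shifts weight toward whichever signed test is currently correlated with the residual, the EVI-style step then picks a prediction that neutralizes that test, and the cumulative residual stays sublinear in T.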