Shape-Adaptive Conditional Calibration for Conformal Prediction via Minimax Optimization

arXiv stat.ML / 3/25/2026


Key Points

  • The paper addresses the difficulty of achieving valid conditional coverage in conformal prediction in finite samples by recharacterizing conditional coverage as a family of marginal moment restrictions.
  • It proposes “Minimax Optimization Predictive Inference” (MOPI), which learns a flexible family of set-valued prediction mappings during calibration via a minimax optimization objective rather than using a fixed score-function-based sublevel set.
  • MOPI is designed to improve “shape adaptivity” of prediction sets while retaining a principled relationship to minimizing mean squared coverage error.
  • The authors provide non-asymptotic theoretical results (oracle inequalities) showing optimal-order convergence rates for coverage error under regularity conditions.
  • They also demonstrate that MOPI can support valid inference conditional on sensitive attributes available only during calibration, and report empirical gains with more efficient prediction sets on complex conditional distributions.
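The moment-restriction view in the first bullet can be made concrete: conditional coverage of a set C(x) at level 1 − α is equivalent to requiring E[(1{Y ∈ C(X)} − (1 − α)) h(X)] = 0 for all test functions h, and a fixed-score conformal set typically satisfies this only for h ≡ 1. The sketch below illustrates that gap on synthetic heteroscedastic data; the data-generating process, score function, and choice of test functions are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Conditional coverage <=> E[(1{Y in C(X)} - (1 - alpha)) * h(X)] = 0
# for all test functions h.  Split conformal with a fixed score enforces
# this only for h == 1 (marginal coverage).
rng = np.random.default_rng(0)
alpha, n = 0.1, 20_000

# Illustrative heteroscedastic data: noise scale grows with |x|.
x = rng.uniform(-2, 2, size=n)
y = np.sin(x) + (0.1 + 0.5 * np.abs(x)) * rng.normal(size=n)

cal, test = slice(0, n // 2), slice(n // 2, n)
scores = np.abs(y[cal] - np.sin(x[cal]))           # absolute-residual score
q = np.quantile(scores, 1 - alpha)                 # split-conformal quantile
covered = np.abs(y[test] - np.sin(x[test])) <= q   # fixed-width band

tests = {
    "h = 1 (marginal)": np.ones(n - n // 2),
    "h = 1{|x| < 1}":  (np.abs(x[test]) < 1).astype(float),
    "h = 1{|x| >= 1}": (np.abs(x[test]) >= 1).astype(float),
}
for name, h in tests.items():
    moment = np.mean((covered - (1 - alpha)) * h)
    print(f"{name}: moment = {moment:+.3f}")
```

The marginal moment comes out near zero, while the two slice moments are clearly nonzero (over-coverage where the noise is small, under-coverage where it is large), which is exactly the failure of shape adaptivity that a fixed sublevel set cannot repair.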

Abstract

Achieving valid conditional coverage in conformal prediction is challenging due to the theoretical difficulty of satisfying pointwise constraints in finite samples. Building upon the characterization of conditional coverage through marginal moment restrictions, we introduce Minimax Optimization Predictive Inference (MOPI), a framework that generalizes prior work by optimizing over a flexible class of set-valued mappings during the calibration phase, rather than simply calibrating a fixed sublevel set. This minimax formulation effectively circumvents the structural constraints of predefined score functions, achieving superior shape adaptivity while maintaining a principled connection to the minimization of mean squared coverage error. Theoretically, we provide non-asymptotic oracle inequalities and show that the convergence rate of the coverage error attains the optimal order under regularity conditions. MOPI also enables valid inference conditional on sensitive attributes that are available during calibration but unobserved at test time. Empirical results on complex, non-standard conditional distributions demonstrate that MOPI produces more efficient prediction sets than existing baselines.