[P] MCGrad: fix calibration of your ML model in subgroups

Reddit r/MachineLearning / 4/5/2026


Key Points

  • Meta has open-sourced MCGrad, a Python package designed to improve multicalibration by fixing model miscalibration within identifiable subgroups or feature intersections.
  • MCGrad reformulates multicalibration as a gradient-boosted decision tree process that learns to predict residual miscalibration from model features and then corrects it.
  • The approach is intended to scale to large datasets using techniques like early stopping to limit harm to overall predictive performance.
  • In Meta’s internal experience across 100+ production models, MCGrad improved log loss and PRAUC on 88% of models while substantially reducing subgroup calibration error.
  • The release includes a GitHub repository, docs, and a live Colab tutorial demonstrating the method in practice.

Hi r/MachineLearning,

We’re open-sourcing MCGrad, a Python package for multicalibration, developed and deployed in production at Meta. This work will also be presented at KDD 2026.

The Problem: A model can be globally calibrated yet significantly miscalibrated within identifiable subgroups or feature intersections (e.g., "users in region X on mobile devices"). Multicalibration aims to ensure reliability across such subpopulations.
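To make the problem concrete, here is a small synthetic sketch (not from the post; the data and the 0.7/0.3 rates are invented for illustration) showing how a constant prediction can be perfectly calibrated on average while being off by 0.2 inside each of two subgroups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example: one feature splits the population into two subgroups
# with different true positive rates (0.7 vs 0.3).
n = 100_000
group = rng.integers(0, 2, n)                        # 0 = "region X", 1 = rest
y = rng.random(n) < np.where(group == 0, 0.7, 0.3)   # true labels

p = np.full(n, 0.5)  # model predicts 0.5 for everyone

def calib_error(p, y):
    """Absolute gap between mean predicted probability and observed rate."""
    return abs(p.mean() - y.mean())

print(f"global:  {calib_error(p, y):.3f}")                        # ~0.00
print(f"group 0: {calib_error(p[group == 0], y[group == 0]):.3f}")  # ~0.20
print(f"group 1: {calib_error(p[group == 1], y[group == 1]):.3f}")  # ~0.20
```

The global gap is near zero because the subgroup errors cancel, which is exactly the failure mode multicalibration targets.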

The Solution: MCGrad reformulates multicalibration using gradient-boosted decision trees. At each step, a lightweight booster learns to predict the residual miscalibration of the base model given the features, automatically identifying and correcting miscalibrated regions. The method scales to large datasets and uses early stopping to preserve predictive performance. See our tutorial for a live demo.
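A loose sketch of the boosting-on-residuals idea, using scikit-learn rather than the actual MCGrad implementation (the data, model choices, and hyperparameters here are illustrative assumptions, not MCGrad's): a shallow booster sees the features plus the base model's logit, learns where the base score is off, and early stopping guards overall performance.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic data: the base model ignores the subgroup feature, so it is
# globally calibrated (mean rate 0.5) but miscalibrated per subgroup.
n = 20_000
X = rng.integers(0, 2, size=(n, 1)).astype(float)      # subgroup indicator
y = rng.random(n) < np.where(X[:, 0] == 0, 0.7, 0.3)
base_p = np.full(n, 0.5)                               # base model scores
base_logit = np.log(base_p / (1 - base_p))

# Booster input: features + base logit. A shallow GBDT picks up the
# residual miscalibration; n_iter_no_change enables early stopping.
Z = np.column_stack([X, base_logit])
Z_tr, Z_va, y_tr, y_va = train_test_split(Z, y, random_state=0)
booster = GradientBoostingClassifier(
    max_depth=2, learning_rate=0.1, n_estimators=200,
    validation_fraction=0.2, n_iter_no_change=5, random_state=0,
)
booster.fit(Z_tr, y_tr)
corrected = booster.predict_proba(Z_va)[:, 1]

for g in (0, 1):
    mask = Z_va[:, 0] == g
    print(f"group {g}: corrected mean prediction = {corrected[mask].mean():.2f}")
```

After correction, the mean predicted probability in each subgroup tracks its observed rate (near 0.7 and 0.3), instead of the flat 0.5 of the base model. MCGrad itself frames this as a dedicated GBDT procedure over the base model's features; see the repo and tutorial for the real API.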

Key Results: Across 100+ production models at Meta, MCGrad improved log loss and PRAUC on 88% of them while substantially reducing subgroup calibration error.

Links:

Install via pip install mcgrad or via conda. Happy to answer questions or discuss details.

submitted by /u/TaXxER