Decision-Focused Learning via Tangent-Space Projection of Prediction Error

arXiv cs.LG · May 5, 2026


Key Points

  • The paper addresses a key challenge in decision-focused learning (DFL): regret gradients are often hard to compute, since obtaining them typically requires differentiating through optimization solvers, and the common workaround of surrogate losses can deviate from the true objective.
  • It derives a closed-form, geometric characterization of regret gradients under regularity conditions, showing they correspond to the prediction error projected onto the tangent space of active constraints and scaled by local curvature.
  • Building on this result, the authors introduce PEAR (Projected Error As Regret-gradient), which computes regret gradients by solving a reduced linear system only over active constraints.
  • Experiments on linear-programming benchmarks and a real-world quadratic-programming task indicate that PEAR achieves better decision quality than all baselines while also being the most computationally efficient.
  • The reported performance advantage remains even when constraints shift, suggesting the approach is robust to changes in the active constraint set.
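The geometric characterization in the second bullet can be sketched as follows; the notation here is ours, not the paper's, and the exact scaling is a simplification:

$$
\nabla_{\hat c}\,\mathrm{Regret} \;\approx\; S \, P \,(\hat c - c),
\qquad
P \;=\; I - A^{\top}\!\left(A A^{\top}\right)^{-1} A,
$$

where $\hat c - c$ is the prediction error, $A$ stacks the locally active constraints, $P$ projects onto their tangent (null) space, and $S$ is a local-curvature scaling derived from the decision problem's Hessian. Components of the error orthogonal to the tangent space cannot change the optimal decision, so they are filtered out of the gradient.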

Abstract

Decision-Focused Learning (DFL) trains predictors to improve downstream decision quality, but computing regret gradients typically requires differentiating through solvers or relying on surrogate losses, which can be computationally expensive or deviate from the true objective. We show that, under standard regularity with locally stable active constraints, the regret gradient admits a closed-form geometric characterization, equivalent to the prediction error projected onto the tangent space of active constraints, scaled by local curvature. This reveals that regret gradients can be obtained by filtering decision-irrelevant components from the MSE gradient, providing a simpler and more direct alternative to existing approaches. Based on this, we propose PEAR (Projected Error As Regret-gradient), which computes regret gradients via a reduced linear system over active constraints, avoiding differentiation through solver iterations or additional optimization solves. Experiments on LP benchmarks and a real-world QP task show that PEAR achieves the best decision quality among all baselines while being the most computationally efficient, with gains that persist under constraint shifts.
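To make the projection step concrete, here is a minimal NumPy sketch of projecting a prediction error onto the tangent space of active equality constraints $Ax = b$. This is an illustrative assumption of ours, not the paper's implementation: the function name is hypothetical, and PEAR's curvature scaling is omitted. Note the linear system solved has the dimension of the active constraint set, which is the "reduced linear system" flavor the abstract describes.

```python
import numpy as np

def tangent_space_projected_error(err, A_active):
    """Project a prediction error onto the null space of the active
    constraints A_active @ x = b, using P = I - A^T (A A^T)^{-1} A.

    Hypothetical sketch: PEAR additionally applies a local-curvature
    scaling, which is omitted here for clarity.
    """
    # Reduced linear system: its size equals the number of active
    # constraints, not the full decision dimension.
    y = np.linalg.solve(A_active @ A_active.T, A_active @ err)
    return err - A_active.T @ y

# Toy example: one active constraint x1 + x2 = b in R^2.
A = np.array([[1.0, 1.0]])
err = np.array([2.0, 0.0])      # raw prediction error (MSE gradient direction)
proj = tangent_space_projected_error(err, A)
# proj lies in the null space of A, so A @ proj is (numerically) zero:
# the decision-irrelevant component of the error has been filtered out.
```

In this example the projected error is `[1.0, -1.0]`: only the component of the error that moves along the constraint surface, and can therefore change the optimal decision, survives the projection.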