Decision-Focused Federated Learning Under Heterogeneous Objectives and Constraints
arXiv stat.ML · April 23, 2026
Opinion · Developer Stack & Infrastructure · Models & Research
Key Points
- The paper proposes a Decision-Focused Federated Learning (DFFL) framework in which clients collaboratively train predictive models whose outputs feed downstream linear optimization problems, without sharing raw data.
- It extends the SPO+ (Smart Predict-then-Optimize) surrogate-loss approach to settings with heterogeneous client objectives and feasibility constraints, deriving heterogeneity bounds via support-function representations that separate objective-shift from feasible-set-shift effects (the SPO+ loss is recalled after this list).
- For strongly convex feasible regions, the authors obtain sharper theoretical bounds because the optimal solution varies stably with the predicted cost vector.
- They introduce a heuristic rule for choosing between local and federated training: federation improves decision quality when the heterogeneity penalty is outweighed by the statistical benefit of pooling data (a stylized version of this trade-off is sketched below).
- Experiments with a FedAvg-style DFFL implementation (a minimal sketch closes this summary) indicate robust performance on strongly convex problems, whereas degradation on polyhedral problems is driven mainly by constraint heterogeneity, most notably for clients with larger sample sizes.
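
For context, the SPO+ surrogate loss of Elmachtoub and Grigas, which the paper extends to the federated setting, is a convex upper bound on the decision regret of a predicted cost vector. For a downstream minimization problem with feasible set S, optimal value z*(c) = min over x in S of cᵀx, and optimizer x*(c), it reads:

```latex
\ell_{\mathrm{SPO+}}(\hat{c}, c)
  = \max_{x \in S} \bigl\{ (c - 2\hat{c})^{\top} x \bigr\}
  + 2\,\hat{c}^{\top} x^{*}(c) - z^{*}(c)
```

Here ĉ is the predicted cost vector and c the realized one; the loss upper-bounds the SPO regret cᵀx*(ĉ) − z*(c), i.e., the extra cost of acting on the prediction rather than the truth.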
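The local-versus-federated rule itself is the paper's heuristic. As a stylized illustration only, with hypothetical symbols not taken from the paper (a heterogeneity penalty H_k for client k, a noise scale σ, local sample size n_k, pooled sample size N), the trade-off the bullet describes could be written as:

```latex
% Stylized illustration; H_k, \sigma, n_k, N are hypothetical symbols.
\text{federate for client } k
  \quad\Longleftrightarrow\quad
  H_k \;\le\; \sigma\!\left(\frac{1}{\sqrt{n_k}} - \frac{1}{\sqrt{N}}\right)
```

The right-hand side is the reduction in estimation error from pooling; for a client that already has many samples, that gain shrinks and heterogeneity dominates, which is consistent with the experimental observation above.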
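Below is a minimal sketch of one FedAvg-style DFFL round for a linear cost predictor ĉ = Wφ trained with the SPO+ subgradient, assuming each client's downstream problem is a small, bounded LP solved with scipy.optimize.linprog. All names (solve_lp, fedavg_round, the per-client (A_ub, b_ub) constraints) are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp(cost, A_ub, b_ub):
    """Solve min cost^T x s.t. A_ub x <= b_ub, x >= 0.
    Illustrative feasible set, assumed nonempty and bounded."""
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x

def spo_plus_grad(W, features, cost, A_ub, b_ub):
    """Subgradient of the SPO+ loss w.r.t. W for a linear predictor c_hat = W @ features."""
    c_hat = W @ features
    x_star = solve_lp(cost, A_ub, b_ub)               # oracle decision at the true cost
    # argmax of (c - 2*c_hat)^T x equals argmin of (2*c_hat - c)^T x
    x_tilde = solve_lp(2 * c_hat - cost, A_ub, b_ub)
    g_c = 2 * (x_star - x_tilde)                      # d(loss)/d(c_hat) by Danskin's theorem
    return np.outer(g_c, features)                    # chain rule through c_hat = W @ features

def local_update(W, data, A_ub, b_ub, lr=0.01, epochs=1):
    """Client-side subgradient steps on the SPO+ loss over (features, cost) pairs."""
    W = W.copy()
    for _ in range(epochs):
        for features, cost in data:
            W -= lr * spo_plus_grad(W, features, cost, A_ub, b_ub)
    return W

def fedavg_round(W_global, clients, lr=0.01):
    """One FedAvg round: local SPO+ updates, then sample-size-weighted averaging.
    Each client is (data, A_ub, b_ub) and may carry its own feasible set."""
    updates, weights = [], []
    for data, A_ub, b_ub in clients:
        updates.append(local_update(W_global, data, A_ub, b_ub, lr))
        weights.append(len(data))
    weights = np.array(weights, dtype=float) / sum(weights)
    return sum(w * U for w, U in zip(weights, updates))
```

Constraint heterogeneity enters through each client's (A_ub, b_ub) pair; in the polyhedral case, this is exactly where the summary above reports the largest degradation.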