Lagrangian Relaxation Score-based Generation for Mixed-Integer Linear Programming

arXiv cs.LG / 2026/3/26


Key Points

  • The paper introduces SRG, a generative predict-and-search framework for mixed-integer linear programming that uses Lagrangian relaxation to guide sampling toward feasible and near-optimal solutions.
  • SRG replaces deterministic single-point predictions with diverse generation via Lagrangian relaxation-guided stochastic differential equations (SDEs), aiming to reduce the need for extensive downstream search.
  • It incorporates convolutional kernels to model inter-variable dependencies, producing candidate solutions that form compact, effective trust-region subproblems for standard MILP solvers.
  • Experiments on multiple public MILP benchmarks show SRG outperforms existing machine-learning baselines in solution quality.
  • The method demonstrates zero-shot transfer to unseen cross-scale instances, achieving competitive optimality with strong reductions in computational overhead versus state-of-the-art exact solvers.
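The trust-region idea in the bullets above can be illustrated with a toy sketch: given a few candidate assignments (here hard-coded stand-ins for what a generative model like SRG might sample; the instance data and candidates are illustrative, not from the paper), the subproblem restricts search to assignments within a small Hamming distance of some candidate. A real pipeline would hand this restricted region to a MILP solver; the brute-force enumeration below just makes the construction concrete for a tiny binary knapsack.

```python
from itertools import product

# Toy binary MILP: maximize c.x subject to a.x <= b, x in {0,1}^n.
c = [5, 4, 3, 7, 2, 6]
a = [4, 3, 2, 5, 1, 4]
b = 9

# Hypothetical candidate solutions (stand-ins for generated samples).
candidates = [
    (1, 0, 0, 1, 0, 0),
    (0, 1, 0, 1, 0, 0),
    (1, 0, 0, 0, 0, 1),
]

def hamming(x, y):
    """Number of coordinates where two binary vectors differ."""
    return sum(xi != yi for xi, yi in zip(x, y))

def trust_region_solve(candidates, delta):
    """Brute-force the trust-region subproblem: only assignments within
    Hamming distance `delta` of some candidate are considered."""
    best, best_val = None, float("-inf")
    for x in product((0, 1), repeat=len(c)):
        if min(hamming(x, s) for s in candidates) > delta:
            continue  # outside the trust region
        if sum(ai * xi for ai, xi in zip(a, x)) > b:
            continue  # violates the knapsack constraint
        val = sum(ci * xi for ci, xi in zip(c, x))
        if val > best_val:
            best, best_val = x, val
    return best, best_val

x_star, obj = trust_region_solve(candidates, delta=2)
print(x_star, obj)
```

With `delta=2` the region already contains an assignment of value 13, which is also the unrestricted optimum of this toy instance, while the enumeration touches far fewer points than the full 2^6 cube minus the region would suggest. That is the intended payoff: good candidates make the restricted subproblem both small and strong.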

Abstract

Predict-and-search (PaS) methods have shown promise for accelerating mixed-integer linear programming (MILP) solving. However, existing approaches typically assume variable independence and rely on deterministic single-point predictions, which limits solution diversity and often necessitates extensive downstream search for high-quality solutions. In this paper, we propose **SRG**, a generative framework based on Lagrangian relaxation-guided stochastic differential equations (SDEs), with theoretical guarantees on solution quality. SRG leverages convolutional kernels to capture inter-variable dependencies while integrating Lagrangian relaxation to guide the sampling process toward feasible and near-optimal regions. Rather than producing a single estimate, SRG generates diverse, high-quality solution candidates that collectively define compact and effective trust-region subproblems for standard MILP solvers. Across multiple public benchmarks, SRG consistently outperforms existing machine learning baselines in solution quality. Moreover, SRG demonstrates strong zero-shot transferability: on unseen cross-scale and cross-problem instances, it achieves competitive optimality with state-of-the-art exact solvers while significantly reducing computational overhead through faster search and superior solution quality.
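The Lagrangian-guided sampling described in the abstract can be sketched in a stripped-down form. The paper's method guides an SDE sampler; the snippet below instead takes a single deterministic guidance step, moving a relaxed point in [0,1]^n along the negative gradient of a Lagrangian-style penalized objective. All symbols (`c`, `a`, `b`, the multiplier `lam`, the step size `eta`) are assumptions for this toy minimization with one covering constraint a.x >= b, not the paper's formulation.

```python
def lagrangian(x, c, a, b, lam):
    """Penalized objective: c.x plus a multiplier-weighted violation
    of the covering constraint a.x >= b (minimization)."""
    viol = max(0.0, b - sum(ai * xi for ai, xi in zip(a, x)))
    return sum(ci * xi for ci, xi in zip(c, x)) + lam * viol

def guided_step(x, c, a, b, lam, eta=0.05):
    """One gradient step on the penalized objective, clipped back
    into the box [0,1]^n (a crude stand-in for SDE guidance)."""
    violated = sum(ai * xi for ai, xi in zip(a, x)) < b
    grad = [ci - (lam * ai if violated else 0.0) for ci, ai in zip(c, a)]
    return [min(1.0, max(0.0, xi - eta * gi)) for xi, gi in zip(x, grad)]

# Start at an infeasible point: a.x = 0.4 < b = 1.
c, a, b, lam = [2.0, 3.0], [1.0, 1.0], 1.0, 10.0
x0 = [0.2, 0.2]
x1 = guided_step(x0, c, a, b, lam)
print(lagrangian(x0, c, a, b, lam), lagrangian(x1, c, a, b, lam))
```

With a sufficiently large multiplier, the violation term dominates the gradient, so the step pulls the iterate toward the feasible region while the objective term keeps it from overshooting. This is the intuition behind using Lagrangian relaxation as a guidance signal: the sampler is biased toward points that are simultaneously feasible and cheap.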