NLPOpt-Net: A Learning Method for Nonlinear Optimization with Feasibility Guarantees

arXiv cs.LG / 5/4/2026


Key Points

  • NLPOpt-Net is an unsupervised neural learning architecture for solving constrained nonlinear programs by learning parametric solution maps while guaranteeing constraint satisfaction.
  • The method combines a backbone neural network optimized with a modified Lagrangian-based loss and a multilayer projection that enforces feasibility by projecting predictions onto the original constraint manifold using local quadratic approximations.
  • It uses an inversion-free, modified Chambolle–Pock algorithm to solve the constrained quadratic projections during the forward pass, and employs the implicit function theorem for efficient backpropagation.
  • Reported results claim near-zero optimality gaps and constraint violations reduced to machine precision across large-scale convex QP/QCQP, NLP, and certain nonconvex problems, with accurate active sets and dual variables for scalable multiparametric programming.
  • The authors provide an open, ready-to-use package with GPU support and note that compiling the projection in C yields an order-of-magnitude inference-time improvement compared with JAX.
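The unsupervised, Lagrangian-based training signal mentioned above can be sketched as a generic penalized Lagrangian: the network never sees labeled optimal solutions, only the objective and constraint residuals of its own predictions. This is an illustrative sketch, not the paper's exact loss (which also includes a consistency term); all function and variable names here are assumptions:

```python
import numpy as np

def lagrangian_loss(x, theta, objective, constraints, lam, rho):
    """Illustrative penalized-Lagrangian training loss (not the paper's API).

    x           : solution predicted by the backbone NN for parameter theta
    objective   : f(x, theta), the NLP objective to minimize
    constraints : g(x, theta), residuals that should equal zero at feasibility
    lam         : current multiplier (dual variable) estimates
    rho         : penalty weight on the squared constraint violation
    """
    g = constraints(x, theta)
    return objective(x, theta) + lam @ g + 0.5 * rho * np.dot(g, g)

# Toy instance: minimize 0.5*x^2 subject to x - theta = 0.
f = lambda x, th: 0.5 * x[0] ** 2
g = lambda x, th: np.array([x[0] - th])
loss = lagrangian_loss(np.array([1.0]), 1.0, f, g, np.array([0.0]), 1.0)
```

Because the loss needs only the problem's own objective and constraint functions, training requires no presolved instances, which is what makes the approach unsupervised.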

Abstract

Nonlinear Parametric Optimization Network (NLPOpt-Net) is an unsupervised learning architecture for solving constrained nonlinear programs (NLPs). Given the structure of an NLP, it learns the parametric solution map with guaranteed constraint satisfaction. The architecture consists of a backbone neural network (NN) followed by a multilayer (k-layered) projection. While the NN drives toward optimality through a loss function consisting of a modified Lagrangian augmented with a consistency loss, the projection ensures feasibility by projecting the NN predictions onto the original constraint manifold. Instead of the typical distance minimization, our projection exploits local quadratic approximations of the original NLP. Under certain conditions (such as convexity), the projection has a descent property, which further improves the NN predictions. NLPOpt-Net deploys an inversion-free, modified Chambolle–Pock algorithm to solve the constrained quadratic projections during the forward pass and uses the implicit function theorem for efficient backpropagation. The fixed structure of the projection further allows decoupling the NN from the projection once training is complete. NLPOpt-Net solves large-scale convex QP, QCQP, NLP, and certain nonconvex problems with a near-zero optimality gap and constraint violations reduced to machine precision. In addition, it provides near-accurate predictions of the active sets and the corresponding dual variables, thereby enabling a scalable approach to multiparametric programming. Compiling the projection in C provides an order-of-magnitude improvement in inference time compared to JAX. We provide the code and NLPOpt-Net as a ready-to-use package with GPU support.
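The "inversion-free" property of the Chambolle–Pock projection can be illustrated on the simplest case: projecting a NN prediction onto an affine constraint set {x : Ax = b} using only matrix-vector products with A and its transpose, never a matrix factorization or inverse. This is a minimal sketch of the primal-dual idea under stated assumptions, not the paper's modified algorithm; all names and step-size choices are illustrative:

```python
import numpy as np

def chambolle_pock_projection(z, A, b, n_iter=20000):
    """Project z onto {x : Ax = b} with Chambolle-Pock primal-dual iterations.

    Solves min_x 0.5*||x - z||^2 s.t. Ax = b. Inversion-free: the loop uses
    only products A @ v and A.T @ y, so it maps well onto GPUs.
    """
    L = np.linalg.norm(A, 2)       # operator norm of A
    tau = sigma = 0.9 / L          # step sizes satisfying tau*sigma*L**2 < 1
    x = z.copy()
    x_bar = x.copy()
    y = np.zeros(A.shape[0])       # dual variable for the constraint Ax = b
    for _ in range(n_iter):
        # Dual step: prox of g*(y) = b @ y, where g is the indicator of {b}.
        y = y + sigma * (A @ x_bar - b)
        # Primal step: prox of f(x) = 0.5*||x - z||^2.
        x_new = (x - tau * (A.T @ y) + tau * z) / (1.0 + tau)
        # Extrapolation (over-relaxation) step.
        x_bar = 2.0 * x_new - x
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
z = rng.standard_normal(6)       # stand-in for a NN prediction
x = chambolle_pock_projection(z, A, b)
```

In NLPOpt-Net the projected subproblems are constrained quadratic approximations of the full NLP rather than plain distance minimizations, but the same inversion-free primal-dual structure is what keeps each projection layer cheap in the forward pass.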