HardNet++: Nonlinear Constraint Enforcement in Neural Networks

arXiv cs.LG / April 22, 2026


Key Points

  • HardNet++ is a neural-network constraint-enforcement method designed to guarantee satisfaction of general nonlinear equality and inequality constraints at inference, not just reduce violations during training.
  • The method iteratively refines network outputs by applying damped local linearizations, and each iteration is differentiable to enable end-to-end training with the constraint layer active.
  • Unlike earlier schemes that work only for specific constraint forms (such as linear constraints) through specialized parameterizations or projection layers, HardNet++ targets broader nonlinear constraint settings.
  • The paper claims that, given certain regularity conditions, the iterative procedure can meet nonlinear constraints to arbitrary tolerance while maintaining optimality in a learning-for-optimization setup.
  • An application to model predictive control demonstrates tight adherence to nonlinear state constraints without sacrificing optimality.
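The damped-linearization idea in the second bullet can be sketched as a small iterative projection: linearize the constraint g(y) ≈ g(y₀) + J(y₀)(y − y₀), take the minimum-norm step that zeroes the linearized residual, and damp it so the update stays where the linearization is valid. This is an illustrative sketch of the general technique only, not the paper's exact algorithm; the function names, damping value, and unit-circle constraint below are invented for the example.

```python
import numpy as np

def enforce_constraints(y0, g, jac, alpha=0.5, tol=1e-8, max_iter=100):
    """Iteratively drive y toward the manifold g(y) = 0 using damped
    Gauss-Newton steps on local linearizations of g.
    Illustrative sketch, not HardNet++'s exact procedure."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        r = g(y)                      # current constraint residual
        if np.linalg.norm(r) < tol:   # satisfied to tolerance
            break
        J = jac(y)                    # local linearization of g at y
        # Minimum-norm correction solving J @ dy = -r, damped by alpha
        # to keep the step inside the linearization's region of validity.
        dy = -alpha * np.linalg.pinv(J) @ r
        y = y + dy
    return y

# Toy nonlinear constraint: force a 2-D point onto the unit circle
# g(y) = y1^2 + y2^2 - 1 = 0.
g = lambda y: np.array([y @ y - 1.0])
jac = lambda y: (2.0 * y).reshape(1, -1)
y = enforce_constraints(np.array([2.0, 1.0]), g, jac)
```

Because every step is a smooth function of the current iterate, the whole loop can be unrolled under automatic differentiation, which is what makes training with the constraint layer active possible, as the paper describes.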

Abstract

Enforcing constraint satisfaction in neural network outputs is critical for safety, reliability, and physical fidelity in many control and decision-making applications. While soft-constrained methods penalize constraint violations during training, they do not guarantee constraint adherence during inference. Other approaches guarantee constraint satisfaction via specific parameterizations or a projection layer, but are tailored to specific forms (e.g., linear constraints), limiting their utility in more general problem settings. Many real-world problems of interest are nonlinear, motivating the development of methods that can enforce general nonlinear constraints. To this end, we introduce HardNet++, a constraint-enforcement method that simultaneously satisfies linear and nonlinear equality and inequality constraints. Our approach iteratively adjusts the network output via damped local linearizations. Each iteration is differentiable, admitting an end-to-end training framework in which the constraint-satisfaction layer is active during training. We show that under certain regularity conditions, this procedure can enforce nonlinear constraint satisfaction to arbitrary tolerance. Finally, we demonstrate tight constraint adherence without loss of optimality in a learning-for-optimization context, where we apply this method to a model predictive control problem with nonlinear state constraints.