Large Neighborhood Search meets Iterative Neural Constraint Heuristics

arXiv cs.LG · March 24, 2026


Key Points

  • The paper links iterative neural constraint-satisfaction heuristics to Large Neighborhood Search (LNS) and reframes neural approaches using standard LNS destroy/repair operators.
  • It adapts ConsFormer into an LNS procedure, adding both classical and prediction-guided destroy operators that leverage the model’s internal scores to choose neighborhoods.
  • For repair, it uses ConsFormer as the neural repair operator and compares sampling-based decoding versus greedy decoding for generating assignments.
  • Experiments on Sudoku, Graph Coloring, and MaxCut show substantial improvements over the neural method’s vanilla setup and stronger competitiveness against classical and other neural baselines.
  • The authors identify recurring design patterns: stochastic destroy outperforms greedy destroy, while greedy repair is better for quickly finding a single high-quality feasible solution.
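The destroy/repair decomposition described above can be sketched as a minimal LNS loop. This is an illustrative toy (graph coloring as conflict minimization), not the paper's ConsFormer pipeline: all function names are hypothetical, and a greedy local repair stands in for the neural repair operator.

```python
import random

def count_conflicts(coloring, edges):
    """Number of edges whose endpoints share a color."""
    return sum(coloring[u] == coloring[v] for u, v in edges)

def stochastic_destroy(coloring, k, rng):
    """Stochastic destroy: free k variables chosen uniformly at random."""
    return rng.sample(range(len(coloring)), k)

def greedy_repair(coloring, freed, edges, num_colors):
    """Greedy repair: reassign each freed variable to the color that
    minimizes conflicts with its current neighbors."""
    adj = {i: [] for i in range(len(coloring))}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    for i in freed:
        coloring[i] = min(range(num_colors),
                          key=lambda c: sum(coloring[j] == c for j in adj[i]))
    return coloring

def lns(edges, n, num_colors, k=2, iters=200, seed=0):
    """LNS: repeatedly destroy part of the incumbent and repair it,
    keeping the candidate when it is at least as good (sideways moves
    allowed to escape plateaus)."""
    rng = random.Random(seed)
    best = [rng.randrange(num_colors) for _ in range(n)]
    best_cost = count_conflicts(best, edges)
    for _ in range(iters):
        cand = list(best)
        freed = stochastic_destroy(cand, k, rng)
        cand = greedy_repair(cand, freed, edges, num_colors)
        cost = count_conflicts(cand, edges)
        if cost <= best_cost:
            best, best_cost = cand, cost
    return best, best_cost
```

On a triangle with three colors, `lns([(0, 1), (1, 2), (0, 2)], n=3, num_colors=3)` quickly reaches a conflict-free coloring; swapping in a learned model for `greedy_repair` gives the neural-LNS shape the paper studies.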

Abstract

Neural networks are being increasingly used as heuristics for constraint satisfaction. These neural methods are often recurrent, learning to iteratively refine candidate assignments. In this work, we make explicit the connection between such iterative neural heuristics and Large Neighborhood Search (LNS), and adapt an existing neural constraint satisfaction method, ConsFormer, into an LNS procedure. We decompose the resulting neural LNS into two standard components: the destroy and repair operators. On the destroy side, we instantiate several classical heuristics and introduce novel prediction-guided operators that exploit the model's internal scores to select neighborhoods. On the repair side, we utilize ConsFormer as a neural repair operator and compare the original sampling-based decoder to a greedy decoder that selects the most likely assignments. Through an empirical study on Sudoku, Graph Coloring, and MaxCut, we find that adapting the neural heuristic to an LNS procedure yields substantial gains over its vanilla settings and improves its competitiveness with classical and neural baselines. We further observe consistent design patterns across tasks: stochastic destroy operators outperform greedy ones, while greedy repair is more effective than sampling-based repair for finding a single high-quality feasible assignment. These findings highlight LNS as a useful lens and design framework for structuring and improving iterative neural approaches.
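The abstract's two design axes, prediction-guided destroy and greedy versus sampling-based repair decoding, can be illustrated with per-variable logits. This is a hedged sketch under assumed interfaces: the function names are hypothetical, and "internal scores" are approximated here as softmax confidences rather than ConsFormer's actual internals.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def prediction_guided_destroy(logits_per_var, k):
    """Free the k variables the model is least confident about
    (lowest max softmax probability); a stand-in for the paper's
    prediction-guided destroy operators."""
    confidence = [max(softmax(l)) for l in logits_per_var]
    return sorted(range(len(confidence)), key=confidence.__getitem__)[:k]

def greedy_decode(logits_per_var):
    """Greedy repair decoding: take the argmax value per variable."""
    return [max(range(len(l)), key=l.__getitem__) for l in logits_per_var]

def sample_decode(logits_per_var, rng):
    """Sampling-based repair decoding: draw each variable's value
    from its softmax distribution."""
    return [rng.choices(range(len(l)), weights=softmax(l), k=1)[0]
            for l in logits_per_var]
```

Under this toy interface, `prediction_guided_destroy` picks the neighborhood to re-solve, after which either decoder produces the repaired assignment; the paper's finding is that the greedy decoder wins when the goal is a single high-quality feasible solution.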