Graph Neural Network-Informed Predictive Flows for Faster Ford-Fulkerson and PAC-Learnability

arXiv cs.LG / April 24, 2026


Key Points

  • The paper introduces a learning-augmented approach that combines Graph Neural Networks (GNNs) with the Ford-Fulkerson max-flow algorithm to speed up max-flow computation and image segmentation.
  • Instead of predicting an initial flow, it learns edge-importance probabilities via a message-passing GNN and maintains them in a priority queue to guide which augmenting paths are explored.
  • The method builds a grid-based flow network from an input image and performs only one GNN inference per problem instance, avoiding repeated neural inference over evolving residual graphs.
  • It adds a bottleneck-aware, Edmonds-Karp-style search and a bidirectional path-construction strategy centered on high-probability edges, aiming to reduce the number of augmentations while preserving max-flow/min-cut optimality.
  • The authors provide theory connecting prediction quality to efficiency using a weighted permutation distance metric and propose a hybrid extension that warm-starts flows alongside edge-priority prediction for segmentation.
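The prediction-guided search in the points above can be sketched as a Ford-Fulkerson variant whose path search expands residual edges in order of predicted importance. This is an illustrative best-first sketch, not the paper's exact procedure (which uses an Edmonds-Karp-style search with bottleneck-aware tie-breaking); the `prob` dict stands in for the GNN's per-edge output, and the fallback priority for residual backward edges is an assumption.

```python
import heapq
from collections import defaultdict

def guided_max_flow(edges, prob, source, sink):
    """Ford-Fulkerson where augmenting paths are found by a best-first
    search preferring edges with high predicted importance.

    edges : dict {(u, v): capacity}
    prob  : dict {(u, v): predicted importance in [0, 1]}
            (hypothetical stand-in for the GNN inference output)
    Returns the max-flow value; optimality is preserved because the
    predictions only order the search, they never prune residual edges.
    """
    res = defaultdict(int)   # residual capacities (backward edges start at 0)
    adj = defaultdict(set)
    for (u, v), c in edges.items():
        res[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)        # allow traversal of backward residual edges

    def find_path():
        # Min-heap on negated probability: high-probability edges first.
        parent = {source: None}
        heap = [(0.0, source)]
        while heap:
            _, u = heapq.heappop(heap)
            if u == sink:
                path = []
                while parent[u] is not None:
                    path.append((parent[u], u))
                    u = parent[u]
                return path[::-1]
            for v in adj[u]:
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    # backward residual edges fall back to a low priority
                    heapq.heappush(heap, (-prob.get((u, v), 0.01), v))
        return None

    total = 0
    while (path := find_path()) is not None:
        bottleneck = min(res[e] for e in path)
        for u, v in path:
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
        total += bottleneck
    return total
```

Because every residual edge remains reachable regardless of its predicted probability, a poor prediction can only increase the number of augmentations, never change the final flow value.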

Abstract

We propose a learning-augmented framework for accelerating max-flow computation and image segmentation by integrating Graph Neural Networks (GNNs) with the Ford-Fulkerson algorithm. Rather than predicting initial flows, our method learns edge importance probabilities to guide augmenting path selection. We introduce a Message Passing GNN (MPGNN) that jointly learns node and edge embeddings through coupled updates, capturing both global structure and local flow dynamics such as residual capacity and bottlenecks. Given an input image, we propose a method to construct a grid-based flow network with source and sink nodes, extract features, and perform a single GNN inference to assign edge probabilities reflecting their likelihood of belonging to high-capacity cuts. These probabilities are stored in a priority queue and used to guide a modified Ford-Fulkerson procedure, prioritizing augmenting paths via an Edmonds-Karp-style search with bottleneck-aware tie-breaking. This avoids repeated inference over residual graphs while leveraging learned structure throughout optimization. We further introduce a bidirectional path construction strategy centered on high-probability edges and provide a theoretical framework relating prediction quality to efficiency via a weighted permutation distance metric. Our method preserves max-flow/min-cut optimality while reducing the number of augmentations in practice. We also outline a hybrid extension combining flow warm-starting with edge-priority prediction, establishing a foundation for learning-guided combinatorial optimization in image segmentation.
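The grid-based flow network the abstract describes can be illustrated with a generic graph-cut-style construction: each pixel becomes a node, 4-neighbor edges get capacities that decay with intensity difference, and virtual source/sink terminals attach to every pixel. The Gaussian similarity weight and the brightness-based terminal seeding below are common conventions assumed for illustration, not the paper's actual feature extraction.

```python
import math

def build_grid_flow_network(image, sigma=0.1, terminal_cap=1.0):
    """Build a 4-connected grid flow network from a 2D intensity image
    (list of lists of floats in [0, 1]).

    Neighbor capacities follow a Gaussian of the intensity difference,
    so similar adjacent pixels are expensive to cut; the source links to
    bright pixels and the sink to dark ones (a crude hypothetical
    seeding rule). Returns (num_nodes, edges) with edges mapping
    (u, v) -> capacity, ready for a max-flow/min-cut solver.
    """
    h, w = len(image), len(image[0])
    node = lambda r, c: r * w + c
    source, sink = h * w, h * w + 1
    edges = {}
    for r in range(h):
        for c in range(w):
            u = node(r, c)
            # neighbor edges: high capacity between similar pixels
            for dr, dc in ((0, 1), (1, 0)):
                nr, nc = r + dr, c + dc
                if nr < h and nc < w:
                    diff = image[r][c] - image[nr][nc]
                    cap = math.exp(-(diff * diff) / (2 * sigma * sigma))
                    edges[(u, node(nr, nc))] = cap
                    edges[(node(nr, nc), u)] = cap
            # terminal edges: brightness as a stand-in foreground score
            edges[(source, u)] = terminal_cap * image[r][c]
            edges[(u, sink)] = terminal_cap * (1 - image[r][c])
    return h * w + 2, edges
```

The min-cut of this network separates source-side (foreground) from sink-side (background) pixels, which is how the max-flow computation doubles as a segmentation; the single GNN inference would assign probabilities over these same edges before the flow solver runs.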