AI Navigate

Cluster-Aware Attention-Based Deep Reinforcement Learning for Pickup and Delivery Problems

arXiv cs.LG / 3/12/2026


Key Points

  • CAADRL presents cluster-aware encoding and hierarchical decoding to exploit PDP's multi-scale structure, using a Transformer-based encoder with global self-attention and intra-cluster attention on depot, pickup, and delivery nodes.
  • It employs a Dynamic Dual-Decoder with a learnable gate to balance intra-cluster routing and inter-cluster transitions at each step, trained end-to-end with a POMO-style policy gradient and multiple symmetric rollouts.
  • Experiments on synthetic clustered and uniform PDP benchmarks show CAADRL matches or exceeds state-of-the-art baselines on clustered instances and remains competitive on uniform instances, especially as problem size grows, with significantly lower inference time than neural collaborative-search baselines.
  • The work demonstrates that explicitly modeling cluster structure provides a strong inductive bias, delivering both performance gains and efficiency for neural PDP solvers.
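The cluster-aware encoding described above can be illustrated with a minimal sketch: global self-attention lets every node see every other node, while a second attention pass is masked so nodes only attend within their own cluster, and the two views are fused. This is a toy single-head numpy version under assumed design choices (additive fusion, made-up cluster labels), not the paper's actual multi-head Transformer encoder.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(h, mask=None):
    """Single-head self-attention; mask[i, j] = True blocks node j from node i."""
    d = h.shape[-1]
    scores = h @ h.T / np.sqrt(d)               # (n, n) similarity logits
    if mask is not None:
        scores = np.where(mask, -1e9, scores)   # masked pairs get ~zero weight
    return softmax(scores, axis=-1) @ h         # (n, d) mixed embeddings

# Toy instance: 7 nodes (depot + 3 pickup-delivery pairs), 2 clusters.
rng = np.random.default_rng(0)
n, d = 7, 8
h = rng.standard_normal((n, d))
cluster = np.array([0, 0, 0, 1, 1, 1, 0])           # hypothetical cluster labels
intra_mask = cluster[:, None] != cluster[None, :]   # block cross-cluster pairs

h_global = attention(h)              # globally informative view
h_intra = attention(h, intra_mask)   # locally role-aware, within-cluster view
h_out = h_global + h_intra           # one simple way to fuse the two views
```

The masking trick is the key point: a single `-1e9` fill before the softmax confines the intra-cluster pass to same-cluster neighbors without any change to the attention mechanism itself.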

Abstract

The Pickup and Delivery Problem (PDP) is a fundamental and challenging variant of the Vehicle Routing Problem, characterized by tightly coupled pickup-delivery pairs, precedence constraints, and spatial layouts that often exhibit clustering. Existing deep reinforcement learning (DRL) approaches either model all nodes on a flat graph, relying on implicit learning to enforce constraints, or achieve strong performance through inference-time collaborative search at the cost of substantial latency. In this paper, we propose CAADRL (Cluster-Aware Attention-based Deep Reinforcement Learning), a DRL framework that explicitly exploits the multi-scale structure of PDP instances via cluster-aware encoding and hierarchical decoding. The encoder builds on a Transformer and combines global self-attention with intra-cluster attention over depot, pickup, and delivery nodes, producing embeddings that are both globally informative and locally role-aware. Based on these embeddings, we introduce a Dynamic Dual-Decoder with a learnable gate that balances intra-cluster routing and inter-cluster transitions at each step. The policy is trained end-to-end with a POMO-style policy gradient scheme using multiple symmetric rollouts per instance. Experiments on synthetic clustered and uniform PDP benchmarks show that CAADRL matches or improves upon strong state-of-the-art baselines on clustered instances and remains highly competitive on uniform instances, particularly as problem size increases. Crucially, our method achieves these results with substantially lower inference time than neural collaborative-search baselines, suggesting that explicitly modeling cluster structure provides an effective and efficient inductive bias for neural PDP solvers.
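The Dynamic Dual-Decoder idea can be sketched as follows: at each decoding step, two score heads rate candidate nodes (one biased toward staying in the current cluster, one toward jumping between clusters), and a learnable gate mixes them before infeasible nodes are masked out. All weight names (`W_intra`, `W_inter`, `w_gate`) and the bilinear scoring form are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_decoder_step(ctx, h, W_intra, W_inter, w_gate, feasible):
    """One decoding step: gate-mix intra- and inter-cluster scores.

    ctx      -- (d,) context vector (e.g. current node + graph summary)
    h        -- (n, d) node embeddings from the encoder
    feasible -- (n,) bool mask of nodes allowed by precedence constraints
    """
    intra_logits = h @ (W_intra @ ctx)      # scores favoring same-cluster moves
    inter_logits = h @ (W_inter @ ctx)      # scores favoring cluster transitions
    g = sigmoid(w_gate @ ctx)               # learnable gate in (0, 1)
    logits = g * intra_logits + (1 - g) * inter_logits
    logits = np.where(feasible, logits, -1e9)   # e.g. deliveries need their pickup first
    return softmax(logits)                  # probability of selecting each node

rng = np.random.default_rng(1)
n, d = 7, 8
h = rng.standard_normal((n, d))
ctx = rng.standard_normal(d)
W_intra, W_inter = rng.standard_normal((2, d, d))
w_gate = rng.standard_normal(d)
feasible = np.array([False, True, True, True, False, False, False])
p = dual_decoder_step(ctx, h, W_intra, W_inter, w_gate, feasible)
```

Because the gate is computed from the step context, the policy can learn to favor intra-cluster routing while a cluster still has open pickup-delivery pairs and shift probability mass toward inter-cluster transitions once it is exhausted.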
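The POMO-style training signal mentioned above has a compact core: each instance is rolled out K times from symmetric starting points, and every rollout's tour cost is advantaged against the mean cost of its siblings, which serves as a shared, learning-free baseline. The sketch below shows only this baseline step with made-up costs; the full method also needs the log-probabilities of each rollout's actions.

```python
import numpy as np

def pomo_advantages(costs):
    """costs: (B, K) tour costs for B instances x K symmetric rollouts each."""
    baseline = costs.mean(axis=1, keepdims=True)   # per-instance shared baseline
    return costs - baseline                        # positive = worse than siblings

costs = np.array([[10.0, 12.0, 11.0],
                  [20.0, 18.0, 19.0]])
adv = pomo_advantages(costs)
# A REINFORCE-style loss would then weight each rollout's summed log-probs by
# its advantage (treated as a constant), so better-than-average rollouts are
# reinforced and worse-than-average ones are suppressed.
```

The advantages in each row sum to zero by construction, which is what makes the shared baseline variance-reducing without introducing bias or extra learned parameters.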