Critic-Free Deep Reinforcement Learning for Maritime Coverage Path Planning on Irregular Hexagonal Grids

arXiv cs.RO / 3/31/2026


Key Points

  • The paper introduces a deep reinforcement learning framework for maritime Coverage Path Planning (CPP) using hexagonal grid representations of irregular regions such as coastlines, islands, and exclusion zones.
  • It reformulates CPP as a neural combinatorial optimization problem where a Transformer-based pointer policy autoregressively constructs coverage tours.
  • To stabilize long-horizon routing without a value function, the authors propose a critic-free Group-Relative Policy Optimization (GRPO) approach that computes advantages via within-instance comparisons of sampled trajectories.
  • Experiments on 1,000 unseen synthetic maritime environments report a 99.0% Hamiltonian success rate, outperforming the best heuristic (46.0%), alongside shorter paths and fewer heading changes versus baselines.
  • The method supports multiple inference modes (greedy, stochastic, and sampling with 2-opt refinement) with reported runtimes under 50 ms per instance on a laptop GPU, suggesting real-time onboard feasibility.
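The within-instance advantage estimation behind GRPO can be sketched in a few lines: for each problem instance, several tours are sampled from the policy, and each tour's return is normalized against the group's mean and standard deviation instead of a learned value function. The function below is an illustrative sketch of that idea, not the paper's implementation; the reward sign convention (negative path length) is an assumption.

```python
import numpy as np

def group_relative_advantages(returns, eps=1e-8):
    """Critic-free advantage estimation in the GRPO style: each sampled
    trajectory is scored relative to the group drawn from the SAME
    instance, so no value network is needed."""
    returns = np.asarray(returns, dtype=np.float64)
    baseline = returns.mean()          # group mean replaces the critic
    scale = returns.std() + eps        # normalize for stable gradients
    return (returns - baseline) / scale

# Hypothetical example: 4 tours sampled for one grid instance,
# with return = -path_length (shorter tour => higher return).
adv = group_relative_advantages([-12.0, -10.5, -11.0, -13.5])
```

Because the advantages are zero-mean within the group, above-average tours are reinforced and below-average tours suppressed, sidestepping the value-estimation instability the authors cite for long-horizon routing.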

Abstract

Maritime surveillance missions, such as search and rescue and environmental monitoring, rely on the efficient allocation of sensing assets over vast and geometrically complex areas. Traditional Coverage Path Planning (CPP) approaches depend on decomposition techniques that struggle with irregular coastlines, islands, and exclusion zones, or require computationally expensive re-planning for every instance. We propose a Deep Reinforcement Learning (DRL) framework to solve CPP on hexagonal grid representations of irregular maritime areas. Unlike conventional methods, we formulate the problem as a neural combinatorial optimization task where a Transformer-based pointer policy autoregressively constructs coverage tours. To overcome the instability of value estimation in long-horizon routing problems, we implement a critic-free Group-Relative Policy Optimization (GRPO) scheme. This method estimates advantages through within-instance comparisons of sampled trajectories rather than relying on a value function. Experiments on 1,000 unseen synthetic maritime environments demonstrate that a trained policy achieves a 99.0% Hamiltonian success rate, more than double the best heuristic (46.0%), while producing paths 7% shorter and with 24% fewer heading changes than the closest baseline. All three inference modes (greedy, stochastic sampling, and sampling with 2-opt refinement) operate under 50 ms per instance on a laptop GPU, confirming feasibility for real-time on-board deployment.
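The third inference mode polishes a sampled tour with 2-opt, a classic local search that reverses a segment of the tour whenever doing so shortens it. The sketch below shows the generic first-improvement variant on an open path; it assumes a symmetric cell-to-cell distance `dist(a, b)` (2-opt itself is metric-agnostic, so it applies to hexagonal cell centers as well as Euclidean points) and is not drawn from the paper's code.

```python
def two_opt(tour, dist):
    """Refine an open tour in place with first-improvement 2-opt.
    `dist(a, b)` is any symmetric distance between cells (assumption)."""
    improved = True
    while improved:
        improved = False
        n = len(tour)
        for i in range(1, n - 2):
            for j in range(i + 1, n - 1):
                # Gain from replacing edges (i-1, i) and (j, j+1)
                # with (i-1, j) and (i, j+1), i.e. reversing tour[i:j+1].
                delta = (dist(tour[i - 1], tour[j]) + dist(tour[i], tour[j + 1])
                         - dist(tour[i - 1], tour[i]) - dist(tour[j], tour[j + 1]))
                if delta < -1e-12:
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour
```

For example, on four planar points forming a crossing path, one segment reversal uncrosses the route and shortens it; the same move straightens back-and-forth detours in a coverage tour, which also tends to reduce heading changes.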