ATRS: Adaptive Trajectory Re-splitting via a Shared Neural Policy for Parallel Optimization
arXiv cs.RO · April 27, 2026
Key Points
- The ATRS framework targets stagnation in parallel long-horizon motion planning by adaptively re-splitting trajectory segments during ADMM optimization rather than relying on a fixed decomposition structure.
- ATRS integrates a shared Deep Reinforcement Learning policy into the parallel ADMM loop, modeling segment re-splitting as a Multi-Agent Shared-Policy Markov Decision Process with homogeneous agents and parameter sharing.
- The shared neural policy provides size invariance, allowing ATRS to handle a dynamically changing number of segments and to generalize across different trajectory lengths, including zero-shot transfer to unseen environments.
- A Confidence-Based Election mechanism improves solver stability by re-splitting only the single most stagnant segment at each step.
- In simulation, ATRS requires up to 26.0% fewer iterations and up to 19.1% less computation time; on hardware, it supports real-time onboard replanning within 35 ms per cycle without sim-to-real degradation.
- The core contribution is a learning-driven, solver-state-based adaptive re-splitting strategy that improves convergence while remaining practical for both offline and real-time onboard planning.
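The election step described above can be illustrated with a minimal sketch. The function names, the window-based stagnation score, and the confidence threshold `tau` below are illustrative assumptions, not details taken from the paper; the actual ATRS policy is a learned neural network operating on solver state.

```python
def elect_segment(residual_history, confidences, tau=0.8):
    """Pick at most one segment to re-split per ADMM step.

    residual_history: list of (first, last) per-segment ADMM residuals
        over a recent window of iterations.
    confidences: per-segment confidence scores, here assumed to come
        from a shared policy (hypothetical interface).
    Returns the index of the segment to re-split, or None.
    """
    # Stagnation score: the smaller a segment's relative residual
    # improvement over the window, the more stagnant it is.
    stagnation = [
        1.0 - (first - last) / (first + 1e-12)
        for first, last in residual_history
    ]
    candidate = max(range(len(stagnation)), key=stagnation.__getitem__)
    # Confidence-based election: act on the single most stagnant
    # segment only when the policy is sufficiently confident,
    # leaving the decomposition untouched otherwise.
    if confidences[candidate] >= tau:
        return candidate
    return None
```

Selecting one segment per step, rather than re-splitting every stagnant segment at once, mirrors the stability rationale in the key points: each re-split perturbs the ADMM consensus variables, so limiting changes to one high-confidence candidate keeps the solver's progress intact.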

