HAD: Combining Hierarchical Diffusion with Metric-Decoupled RL for End-to-End Driving

arXiv cs.RO / 4/7/2026


Key Points

  • The paper introduces HAD, an end-to-end autonomous driving planning framework that combines hierarchical diffusion (coarse-to-fine trajectory refinement) with reinforcement learning for better trajectory optimization.
  • It proposes Structure-Preserved Trajectory Expansion to generate more realistic diffusion candidates while preserving kinematic structure, aiming to reduce denoising difficulty caused by unrealistic Gaussian perturbations.
  • For learning, it presents Metric-Decoupled Policy Optimization (MDPO), which optimizes multiple driving objectives using structured signals rather than a single coupled reward.
  • Experiments report new state-of-the-art results on both NAVSIM and HUGSIM, with improvements of +2.3 EPDMS on NAVSIM and +4.9 Route Completion on HUGSIM.
  • Overall, the work targets both the candidate-selection/trajectory-generation bottleneck in diffusion-based decoding and the optimization limitations of prior end-to-end RL approaches.
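The contrast MDPO draws between a single coupled reward and per-metric structured signals can be illustrated with a toy sketch. The metric names, values, and baseline scheme below are illustrative assumptions, not details from the paper:

```python
def coupled_advantage(rewards_by_metric, baseline):
    # Single coupled reward: collapse all metrics into one scalar, then
    # compute one advantage. A large penalty in one metric (e.g. collision)
    # can drown out the learning signal from the others.
    total = sum(rewards_by_metric.values())
    return total - baseline

def metric_decoupled_advantages(rewards_by_metric, baselines):
    # Decoupled variant (illustrative): keep one advantage per metric so
    # each driving objective contributes its own structured signal.
    return {m: r - baselines[m] for m, r in rewards_by_metric.items()}

# Hypothetical per-trajectory metric rewards (not from the paper):
rewards = {"collision": -5.0, "progress": 2.0, "comfort": 0.5}
coupled = coupled_advantage(rewards, baseline=-1.0)
decoupled = metric_decoupled_advantages(
    rewards, baselines={"collision": -4.0, "progress": 1.0, "comfort": 0.4}
)
```

In the coupled case the collision penalty dominates the single scalar, whereas the decoupled dictionary preserves a separate signal for progress and comfort.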

Abstract

End-to-end planning has emerged as a dominant paradigm for autonomous driving, where recent models often adopt a scoring-selection framework to choose trajectories from a large set of candidates, with diffusion-based decoding showing strong promise. However, directly selecting from the entire candidate space remains difficult to optimize, and Gaussian perturbations used in diffusion often introduce unrealistic trajectories that complicate the denoising process. In addition, for training these models, reinforcement learning (RL) has shown promise, but existing end-to-end RL approaches typically rely on a single coupled reward without structured signals, limiting optimization effectiveness. To address these challenges, we propose HAD, an end-to-end planning framework with a Hierarchical Diffusion Policy that decomposes planning into a coarse-to-fine process. To improve trajectory generation, we introduce Structure-Preserved Trajectory Expansion, which produces realistic candidates while maintaining kinematic structure. For policy learning, we develop Metric-Decoupled Policy Optimization (MDPO) to enable structured RL optimization across multiple driving objectives. Extensive experiments show that HAD achieves new state-of-the-art performance on both NAVSIM and HUGSIM, outperforming prior art by a large margin: +2.3 EPDMS on NAVSIM and +4.9 Route Completion on HUGSIM.
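The coarse-to-fine decomposition in the abstract can be pictured with a toy two-stage sketch: a coarse stage lays down a few sparse waypoints, and a fine stage densifies them into a full trajectory. This is not the paper's actual model; the waypoint count, the linear interpolation standing in for diffusion refinement, and all shapes are assumptions for illustration only:

```python
import numpy as np

def coarse_stage(goal, n_waypoints=4):
    # Coarse stage (toy): a few sparse waypoints toward the goal.
    # A real hierarchical diffusion policy would denoise these iteratively.
    t = np.linspace(0.0, 1.0, n_waypoints)
    return np.outer(t, goal)  # shape (n_waypoints, 2)

def fine_stage(waypoints, points_per_segment=5):
    # Fine stage (toy): densify the coarse plan by interpolating between
    # consecutive waypoints, standing in for fine-grained refinement.
    dense = []
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        for s in np.linspace(0.0, 1.0, points_per_segment, endpoint=False):
            dense.append((1 - s) * a + s * b)
    dense.append(waypoints[-1])
    return np.array(dense)

goal = np.array([10.0, 2.0])
coarse = coarse_stage(goal)       # 4 sparse waypoints
trajectory = fine_stage(coarse)   # dense plan ending at the goal
```

The point of the hierarchy is that the coarse stage only has to get the rough route right, leaving local geometry to the fine stage.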