RAD-2: Scaling Reinforcement Learning in a Generator-Discriminator Framework

arXiv cs.CV / April 17, 2026

Key Points

  • The paper introduces RAD-2, a generator–discriminator reinforcement learning framework for closed-loop motion planning in high-level autonomous driving under multimodal uncertainty.
  • It uses a diffusion-based generator to propose diverse trajectories, while an RL-optimized discriminator reranks the candidates by long-term driving quality, providing more effective negative feedback than imitation-only training (a minimal sketch of this propose-then-rerank loop follows the list).
  • RAD-2 improves RL training stability and credit assignment with Temporally Consistent Group Relative Policy Optimization, and it adds On-policy Generator Optimization to turn closed-loop feedback into structured optimization signals that guide the generator toward high-reward trajectories.
  • For scalable training and evaluation, the authors propose BEV-Warp, a high-throughput simulation environment that performs closed-loop testing directly in BEV feature space via spatial warping.
  • Experiments report a 56% reduction in collision rate versus strong diffusion-based planners, along with real-world gains in perceived safety and driving smoothness in complex urban traffic.
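
The bullets above describe a propose-then-rerank loop; the sketch below illustrates that control flow. Everything in it is an assumption for illustration: `generator.sample`, `discriminator`, and the tensor shapes are hypothetical stand-ins, not the paper's API.

```python
import torch

def plan_step(generator, discriminator, bev_features: torch.Tensor,
              num_candidates: int = 16) -> torch.Tensor:
    """Hypothetical propose-then-rerank planning step."""
    # The diffusion generator proposes diverse trajectory candidates
    # conditioned on the scene representation, e.g. a tensor of shape
    # (num_candidates, horizon, 2) holding xy-waypoints.
    candidates = generator.sample(bev_features, num_samples=num_candidates)

    # The RL-optimized discriminator scores each candidate's long-term
    # driving quality, supplying the corrective signal that pure
    # imitation learning lacks.
    scores = discriminator(bev_features, candidates)  # (num_candidates,)

    # Execute the highest-scoring trajectory in closed loop.
    return candidates[torch.argmax(scores)]
```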

Abstract

High-level autonomous driving requires motion planners capable of modeling multimodal future uncertainties while remaining robust in closed-loop interactions. Although diffusion-based planners are effective at modeling complex trajectory distributions, they often suffer from stochastic instabilities and the lack of corrective negative feedback when trained purely with imitation learning. To address these issues, we propose RAD-2, a unified generator-discriminator framework for closed-loop planning. Specifically, a diffusion-based generator is used to produce diverse trajectory candidates, while an RL-optimized discriminator reranks these candidates according to their long-term driving quality. This decoupled design avoids directly applying sparse scalar rewards to the full high-dimensional trajectory space, thereby improving optimization stability. To further enhance reinforcement learning, we introduce Temporally Consistent Group Relative Policy Optimization, which exploits temporal coherence to alleviate the credit assignment problem. In addition, we propose On-policy Generator Optimization, which converts closed-loop feedback into structured longitudinal optimization signals and progressively shifts the generator toward high-reward trajectory manifolds. To support efficient large-scale training, we introduce BEV-Warp, a high-throughput simulation environment that performs closed-loop evaluation directly in Bird's-Eye View feature space via spatial warping. RAD-2 reduces the collision rate by 56% compared with strong diffusion-based planners. Real-world deployment further demonstrates improved perceived safety and driving smoothness in complex urban traffic.
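
The abstract names Temporally Consistent Group Relative Policy Optimization as the mechanism that eases credit assignment, but does not spell out the formulation, so the sketch below is only one plausible reading: standard group-relative advantages (each rollout normalized against a group sampled from the same state, as in GRPO) combined with an exponential backward-smoothing pass to keep advantages temporally coherent. The function name `tc_grpo_advantages`, the `smooth` factor, and the reward layout are all assumptions.

```python
import torch

def tc_grpo_advantages(rewards: torch.Tensor, smooth: float = 0.9,
                       eps: float = 1e-6) -> torch.Tensor:
    """Hypothetical group-relative advantages with temporal smoothing.

    rewards: (G, T) per-step rewards for G rollouts sampled from the
    same state (one "group") over a horizon of T steps.
    """
    # Group-relative normalization per timestep: each rollout is scored
    # against its group mates, avoiding a learned value baseline (as in GRPO).
    mean = rewards.mean(dim=0, keepdim=True)      # (1, T)
    std = rewards.std(dim=0, keepdim=True) + eps  # (1, T)
    adv = (rewards - mean) / std                  # (G, T)

    # Assumed temporal-consistency pass: propagate credit backward with
    # exponential smoothing so neighboring steps get coherent signals.
    out = torch.zeros_like(adv)
    running = torch.zeros(adv.shape[0])
    for t in reversed(range(adv.shape[1])):
        running = adv[:, t] + smooth * running
        out[:, t] = running
    return out
```

BEV-Warp is described as running closed-loop evaluation directly in BEV feature space via spatial warping. One way to picture that, again as an assumption-laden sketch rather than the paper's implementation: apply the ego vehicle's planned motion as an affine warp to the current BEV feature map, approximating the next frame without re-running perception. Coordinate conventions and metric-to-grid scaling are glossed over here.

```python
import torch
import torch.nn.functional as F

def warp_bev(bev: torch.Tensor, dx: torch.Tensor, dy: torch.Tensor,
             dtheta: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: warp BEV features (B, C, H, W) by ego motion.

    dx, dy are per-batch translations already expressed in normalized
    grid units ([-1, 1] across the map); dtheta is the heading change
    in radians.
    """
    cos, sin = torch.cos(dtheta), torch.sin(dtheta)
    # One 2x3 affine matrix per batch element (rotation + translation).
    # Note: affine_grid interprets theta as mapping output grid
    # coordinates back to input coordinates, so signs may need flipping
    # depending on the motion convention.
    theta = torch.zeros(bev.shape[0], 2, 3)
    theta[:, 0, 0], theta[:, 0, 1], theta[:, 0, 2] = cos, -sin, dx
    theta[:, 1, 0], theta[:, 1, 1], theta[:, 1, 2] = sin, cos, dy
    # Resample the feature map along the warped grid; regions that move
    # out of view are zero-padded by default.
    grid = F.affine_grid(theta, bev.shape, align_corners=False)
    return F.grid_sample(bev, grid, align_corners=False)
```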