Beyond Scalar Rewards: Distributional Reinforcement Learning with Preordered Objectives for Safe and Reliable Autonomous Driving

arXiv cs.RO / 3/24/2026


Key Points

  • The paper argues that scalarizing multiple driving objectives in RL (e.g., safety vs. efficiency) can collapse priority information and lead to policies that violate safety-critical constraints.
  • It introduces the Preordered Multi-Objective MDP (Pr-MOMDP), which represents objectives with an explicit precedence (preorder) structure rather than combining them into a single weighted reward.
  • To operationalize this, the authors extend distributional RL using Quantile Dominance (QD), a pairwise comparison metric that evaluates action return distributions without compressing them into one statistic.
  • They propose an algorithm for extracting non-dominated action subsets across objectives, so precedence directly shapes both decision-making and training targets.
  • Experiments on CARLA using Implicit Quantile Networks (IQN) show improved success rates and fewer collisions/off-road events, along with statistically more robust policies than IQN and ensemble-IQN baselines.
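The Quantile Dominance idea in the bullets above can be sketched as a pairwise check on estimated quantile returns. The paper's exact QD definition is not given here, so the function below is a plausible reading, not the authors' implementation: action `a` dominates action `b` when `a`'s sorted quantile estimates are at least as large at every quantile level (first-order stochastic dominance on sampled quantiles); the `margin` parameter is a hypothetical knob for requiring a minimum gap.

```python
import numpy as np

def quantile_dominates(q_a, q_b, margin=0.0):
    """Check whether action a's return distribution QD-dominates action b's.

    q_a, q_b: quantile estimates for the two actions at the same tau levels
    (e.g., outputs of an IQN head). Sorting makes the check robust to
    unordered quantile outputs. This is a sketch of one possible QD metric.
    """
    q_a = np.sort(np.asarray(q_a, dtype=float))
    q_b = np.sort(np.asarray(q_b, dtype=float))
    # a dominates b if every quantile of a is at least that of b (plus margin)
    return bool(np.all(q_a >= q_b + margin))
```

Unlike comparing means (the usual Q-value argmax), this keeps the whole distribution in play: an action with a slightly lower mean but no bad left tail is not automatically discarded.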

Abstract

Autonomous driving involves multiple, often conflicting objectives such as safety, efficiency, and comfort. In reinforcement learning (RL), these objectives are typically combined through weighted summation, which collapses their relative priorities and often yields policies that violate safety-critical constraints. To overcome this limitation, we introduce the Preordered Multi-Objective MDP (Pr-MOMDP), which augments standard MOMDPs with a preorder over reward components. This structure enables reasoning about actions with respect to a hierarchy of objectives rather than a scalar signal. To make this structure actionable, we extend distributional RL with a novel pairwise comparison metric, Quantile Dominance (QD), that evaluates action return distributions without reducing them to a single statistic. Building on QD, we propose an algorithm for extracting optimal subsets, the subsets of actions that remain non-dominated under each objective, which allows precedence information to shape both decision-making and training targets. Our framework is instantiated with Implicit Quantile Networks (IQN), establishing a concrete implementation while preserving compatibility with a broad class of distributional RL methods. Experiments in CARLA show improved success rates, fewer collisions and off-road events, and statistically more robust policies than IQN and ensemble-IQN baselines. By ensuring policies respect the reward preorder, our work advances safer, more reliable autonomous driving systems.
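The abstract's "optimal subsets" construction can be illustrated with a small sketch: filter the candidate action set objective by objective, in precedence order, keeping only actions that no other candidate strictly dominates on that objective's quantile returns. This is an assumed reading of the Pr-MOMDP filtering step, not the paper's algorithm; the action labels and data are hypothetical.

```python
import numpy as np

def non_dominated_subset(quantiles, margin=0.0):
    """Keep actions not strictly QD-dominated by another candidate.

    quantiles: dict mapping action -> quantile estimates for ONE objective.
    Strict dominance here means weakly better at every quantile level and
    strictly better at some level (a sketch of one possible QD rule).
    """
    actions = list(quantiles)
    keep = []
    for a in actions:
        qa = np.sort(np.asarray(quantiles[a], dtype=float))
        dominated = False
        for b in actions:
            if b == a:
                continue
            qb = np.sort(np.asarray(quantiles[b], dtype=float))
            if np.all(qb >= qa + margin) and np.any(qb > qa):
                dominated = True
                break
        if not dominated:
            keep.append(a)
    return keep

def preorder_filter(per_objective_quantiles):
    """Apply non-dominated filtering objective by objective.

    per_objective_quantiles: list of dicts (highest-precedence objective
    first), each mapping action -> quantile estimates. Surviving actions
    from each stage become the candidates for the next, so the preorder
    shapes the final action set.
    """
    candidates = set(per_objective_quantiles[0])
    for obj_q in per_objective_quantiles:
        stage = {a: obj_q[a] for a in candidates}
        candidates = set(non_dominated_subset(stage))
    return candidates
```

In this sketch a higher-priority objective (e.g., collision avoidance) prunes unsafe actions first, and lower-priority objectives (e.g., progress) only break ties among the survivors, which is how a preorder avoids the priority collapse of weighted summation.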