Learning Rollout from Sampling: An R1-Style Tokenized Traffic Simulation Model

arXiv cs.RO / 3/27/2026


Key Points

  • The paper proposes R1Sim, a tokenized traffic simulation policy that learns diverse, high-fidelity multi-agent behaviors from human driving demonstrations.
  • It adapts the LLM-style next-token-prediction paradigm to traffic simulation and addresses its limited exploration by using the entropy patterns of motion tokens to guide where to sample.
  • R1Sim introduces an entropy-guided adaptive sampling mechanism that targets motion tokens with high uncertainty and high potential that prior methods may underexplore.
  • The method further refines motion behaviors with Group Relative Policy Optimization (GRPO) using a safety-aware reward design to balance exploration and exploitation.
  • Experiments on the Waymo Sim Agent benchmark indicate that R1Sim delivers competitive results versus state-of-the-art approaches while producing realistic, safe, and diverse behaviors.
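The entropy-guided sampling idea in the key points can be illustrated with a minimal sketch. This is not the paper's implementation; the threshold, temperature, and function names below are illustrative assumptions. The core idea shown is: compute the entropy of the motion-token distribution at each step, and sample more exploratorily (here, at a higher temperature) when entropy is high, so uncertain yet promising tokens are not skipped.

```python
import numpy as np

def token_entropy(probs, eps=1e-12):
    """Shannon entropy of a categorical distribution over motion tokens."""
    p = np.clip(probs, eps, 1.0)
    return float(-np.sum(p * np.log(p)))

def softmax(logits):
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def entropy_guided_sample(logits, entropy_threshold=1.5, explore_temp=1.5, rng=None):
    """Sample a motion token index; widen the distribution at
    high-entropy steps (hypothetical values, for illustration only)."""
    rng = rng or np.random.default_rng()
    probs = softmax(logits)
    H = token_entropy(probs)
    if H > entropy_threshold:
        # High-uncertainty step: resample at a higher temperature
        # so low-probability but potentially valuable tokens get picked.
        probs = softmax(logits / explore_temp)
    token = int(rng.choice(len(probs), p=probs))
    return token, H
```

A uniform distribution over K tokens has entropy log(K), so with a large enough vocabulary the exploratory branch triggers exactly at the uncertain steps the paper targets.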

Abstract

Learning diverse and high-fidelity traffic simulations from human driving demonstrations is crucial for autonomous driving evaluation. The recent next-token prediction (NTP) paradigm, widely adopted in large language models (LLMs), has been applied to traffic simulation and achieves iterative improvements via supervised fine-tuning (SFT). However, such methods limit active exploration of potentially valuable motion tokens, particularly in suboptimal regions. Entropy patterns provide a promising perspective for enabling exploration driven by motion token uncertainty. Motivated by this insight, we propose a novel tokenized traffic simulation policy, R1Sim, which represents an initial attempt to explore reinforcement learning based on motion token entropy patterns, and systematically analyzes the impact of different motion tokens on simulation outcomes. Specifically, we introduce an entropy-guided adaptive sampling mechanism that focuses on previously overlooked motion tokens with high uncertainty yet high potential. We further optimize motion behaviors using Group Relative Policy Optimization (GRPO), guided by a safety-aware reward design. Overall, these components enable a balanced exploration-exploitation trade-off through diverse high-uncertainty sampling and group-wise comparative estimation, resulting in realistic, safe, and diverse multi-agent behaviors. Extensive experiments on the Waymo Sim Agent benchmark demonstrate that R1Sim achieves competitive performance compared to state-of-the-art methods.
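The abstract's "group-wise comparative estimation" refers to GRPO's critic-free advantage: each rollout's reward is normalized against the other rollouts in its sampled group. The sketch below shows that computation together with a safety-aware reward; the reward terms and their weights are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def safety_aware_reward(rollout):
    """Hypothetical reward: a realism score penalized by safety
    violations. Field names and weights are illustrative only."""
    return (rollout["realism"]
            - 5.0 * rollout["collisions"]
            - 2.0 * rollout["offroad"])

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages, the core of GRPO: standardize each
    rollout's reward by the group's mean and standard deviation, so no
    learned value function (critic) is needed."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

# A group of sampled rollouts for the same scene (made-up numbers):
group = [
    {"realism": 0.9, "collisions": 0, "offroad": 0},
    {"realism": 0.7, "collisions": 1, "offroad": 0},
    {"realism": 0.8, "collisions": 0, "offroad": 1},
]
advantages = grpo_advantages([safety_aware_reward(g) for g in group])
```

Rollouts that are safer than their group peers receive positive advantages and are reinforced, which is how the reward design steers exploration toward realistic yet safe behavior.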