Physics-Informed Reinforcement Learning of Spatial Density Velocity Potentials for Map-Free Racing

arXiv cs.RO / 4/13/2026


Key Points

  • The paper addresses the challenge of map-free autonomous racing by using reinforcement learning that operates directly from instantaneous sensor data while respecting acceleration and tire-friction limits.
  • It proposes a physics-informed DRL approach that parameterizes nonlinear vehicle dynamics from the spectral distribution of depth measurements and learns time-optimal and overtaking controls.
  • To improve simulation-to-reality transfer and hardware stability, the method employs a physics-engine exploit-aware reward and replaces an explicit collision penalty with an implicit truncation of the value horizon.
  • The learned policy achieves stronger out-of-distribution performance than human demonstrations, including a reported 12% improvement on OOD tracks on proportionally scaled hardware, while maximizing the friction circle using tire dynamics resembling an empirical Pacejka model.
  • System identification results suggest a two-stage network behavior: the first layer compresses spatial observations into higher-resolution digitized track features (especially near corner apexes), and the second layer encodes nonlinear vehicle dynamics.
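The implicit truncation of the value horizon mentioned above can be illustrated with a one-step temporal-difference target: instead of subtracting an explicit collision penalty, a collision simply ends bootstrapping, so the agent implicitly forfeits all discounted future reward. This is a minimal sketch of the general RL mechanism, not the paper's implementation; the function name and discount value are assumptions.

```python
def td_target(reward, next_value, collided, gamma=0.99):
    """One-step TD target with implicit value-horizon truncation.

    On collision the episode terminates and the bootstrap term is dropped,
    so the lost discounted return acts as an implicit penalty, without any
    hand-tuned negative collision reward. (Illustrative sketch only.)
    """
    if collided:
        return reward                      # horizon truncated: no bootstrap
    return reward + gamma * next_value     # usual bootstrapped target
```

Because the truncated target is strictly smaller whenever future value is positive, the policy is still pushed away from collisions, while avoiding the variance and reward-scale tuning that an explicit penalty introduces.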

Abstract

Autonomous racing without prebuilt maps is a grand challenge for embedded robotics: it requires kinodynamic planning from instantaneous sensor data at the limits of acceleration and tire friction. Out-Of-Distribution (OOD) generalization to varied racetrack configurations relies on Machine Learning (ML) to encode the mathematical relation between sensor data and vehicle actuation for end-to-end control with implicit localization. Existing approaches comprise Behavioral Cloning (BC), which is capped at human reaction times, and Deep Reinforcement Learning (DRL), which requires large-scale collisions for comprehensive training; such training is infeasible outside simulation yet arduous to transfer to reality, so DRL outperforms BC in simulation but exhibits actuation instability on hardware. This paper presents a DRL method that parameterizes nonlinear vehicle dynamics from the spectral distribution of depth measurements with a non-geometric, physics-informed reward, inferring time-optimal and overtaking racing controls with an Artificial Neural Network (ANN) that uses less than 1% of the computation of BC and model-based DRL. Slaloming induced by simulation-to-reality transfer and variance-induced conservatism are eliminated by combining a physics-engine exploit-aware reward with the replacement of an explicit collision penalty by an implicit truncation of the value horizon. The policy outperforms human demonstrations by 12% on OOD tracks on proportionally scaled hardware by maximizing the friction circle with tire dynamics that resemble an empirical Pacejka tire model. System identification illuminates a functional bifurcation: the first layer compresses spatial observations to extract digitized track features, with higher resolution at corner apexes, while the second encodes nonlinear dynamics.
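The empirical Pacejka "magic formula" referenced in the abstract is a standard tire model; the sketch below shows its common lateral-force form. The coefficient values here are illustrative placeholders, not parameters fitted to the paper's vehicle.

```python
import math

def pacejka_lateral_force(slip_angle, B=10.0, C=1.9, D=1.0, E=0.97):
    """Pacejka 'magic formula': normalized lateral tire force vs slip angle.

    B (stiffness), C (shape), D (peak), E (curvature) are illustrative
    values only; real coefficients are fitted to measured tire data.
    """
    x = B * slip_angle
    return D * math.sin(C * math.atan(x - E * (x - math.atan(x))))
```

The curve rises roughly linearly for small slip angles, peaks near the friction limit, then falls off as the tire saturates; maximizing the friction circle means operating near that peak in combined lateral and longitudinal slip.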