Behavior-Constrained Reinforcement Learning with Receding-Horizon Credit Assignment for High-Performance Control

arXiv cs.RO / 4/6/2026


Key Points

  • The paper introduces a behavior-constrained reinforcement learning framework for robotics control that explicitly limits deviation from expert (human) behavior while still improving performance beyond demonstrations.
  • It uses a receding-horizon, predictive mechanism that performs trajectory-level credit assignment via look-ahead rewards during training, reflecting how expert-consistent behavior emerges over time (see the sketch after this list).
  • The policy is conditioned on reference trajectories to capture variability in expert behavior under disturbances and changing conditions, modeling a distribution of acceptable behaviors rather than a single target.
  • Experiments in a high-fidelity race car simulation using professional-driver data show the learned policies achieve competitive lap times while staying closely aligned with expert driving style, outperforming baseline imitation-learning and reinforcement-learning approaches in both performance and imitation quality.
  • The authors further validate the approach with a human-grounded, driver-in-the-loop evaluation, showing that the learned policies reproduce setup-dependent driving characteristics consistent with feedback from top professional race drivers.
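
To make the look-ahead mechanism concrete, here is a minimal sketch of how a receding-horizon, behavior-constrained reward could be computed during training. Everything here is an illustrative assumption rather than the paper's implementation: the callables `dynamics`, `policy`, and `task_reward`, the Euclidean deviation metric, and the default `horizon` and `lam` values are all hypothetical.

```python
import numpy as np

def lookahead_reward(state, action, dynamics, policy, ref_traj, task_reward,
                     horizon=10, lam=0.5):
    """Hypothetical behavior-constrained reward with receding-horizon look-ahead.

    Rolls the current policy forward `horizon` steps through a (learned or
    analytic) dynamics model, then penalizes the predicted short-term
    trajectory's deviation from the expert reference trajectory. Names,
    metric, and hyperparameters are illustrative assumptions.
    """
    traj = [state]
    s, a = state, action
    for _ in range(horizon):
        s = dynamics(s, a)          # one-step prediction of the next state
        a = policy(s)               # action the current policy would take there
        traj.append(s)
    pred = np.stack(traj)           # predicted short-term trajectory, (H+1, d)
    ref = np.asarray(ref_traj)[:len(pred)]  # expert states over the same window
    deviation = np.mean(np.linalg.norm(pred - ref, axis=-1))
    # Task reward (e.g., progress along the track) minus a weighted
    # trajectory-level deviation penalty: the look-ahead term assigns credit
    # for how expert-consistent the behavior will be over the next few steps,
    # not just at the current instant.
    return task_reward(state, action) - lam * deviation
```

Because the window is recomputed at every timestep, the deviation penalty acts as a receding-horizon signal: each action is credited or penalized for the short-term trajectory it induces relative to the expert.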

Abstract

Learning high-performance control policies that remain consistent with expert behavior is a fundamental challenge in robotics. Reinforcement learning can discover high-performing strategies but often departs from desirable human behavior, whereas imitation learning is limited by demonstration quality and struggles to improve beyond expert data. We propose a behavior-constrained reinforcement learning framework that improves beyond demonstrations while explicitly controlling deviation from expert behavior. Because expert-consistent behavior in dynamic control is inherently trajectory-level, we introduce a receding-horizon predictive mechanism that models short-term future trajectories and provides look-ahead rewards during training. To account for the natural variability of human behavior under disturbances and changing conditions, we further condition the policy on reference trajectories, allowing it to represent a distribution of expert-consistent behaviors rather than a single deterministic target. Empirically, we evaluate the approach in high-fidelity race car simulation using data from professional drivers, a domain characterized by extreme dynamics and narrow performance margins. The learned policies achieve competitive lap times while maintaining close alignment with expert driving behavior, outperforming baseline methods in both performance and imitation quality. Beyond standard benchmarks, we conduct human-grounded evaluation in a driver-in-the-loop simulator and show that the learned policies reproduce setup-dependent driving characteristics consistent with the feedback of top-class professional race drivers. These results demonstrate that our method enables learning high-performance control policies that are both optimal and behavior-consistent, and can serve as reliable surrogates for human decision-making in complex control systems.
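
As a rough illustration of the reference-conditioning idea described in the abstract, a policy network might consume both the current state and a window of upcoming expert reference states, so that different references induce different expert-consistent behaviors from the same network. The sketch below is a hypothetical architecture under assumed dimensions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class ReferenceConditionedPolicy(nn.Module):
    """Hypothetical policy conditioned on a reference-trajectory window.

    Concatenates the current state with a flattened window of K upcoming
    reference states, letting one network represent a distribution of
    expert-consistent behaviors rather than a single fixed target.
    All sizes and the MLP structure are illustrative assumptions.
    """
    def __init__(self, state_dim=8, ref_window=10, action_dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + ref_window * state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
            nn.Tanh(),  # e.g., steering and throttle bounded in [-1, 1]
        )

    def forward(self, state, ref_window_states):
        # state: (B, state_dim); ref_window_states: (B, K, state_dim)
        ref_flat = ref_window_states.flatten(start_dim=1)
        return self.net(torch.cat([state, ref_flat], dim=-1))

# Usage: the same state yields different actions under different references,
# capturing the variability of expert behavior across conditions.
policy = ReferenceConditionedPolicy()
state = torch.randn(1, 8)
ref = torch.randn(1, 10, 8)
action = policy(state, ref)  # shape (1, 2)
```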