Optimizing Neurorobot Policy under Limited Demonstration Data through Preference Regret

arXiv cs.RO / 4/7/2026


Key Points

  • The paper tackles reinforcement learning from demonstrations (RLfD) under realistic conditions where expert data is scarce and collecting demonstrations is expensive.
  • It proposes the MYOE (“master your own expertise”) self-imitation framework to help robots learn complex skills from limited demonstration samples.
  • The method introduces a QMoP-SSM (queryable mixture-of-preferences state space model) that estimates time-step-level desired goals for the agent.
  • It computes a "preference regret" from these desired goals and uses it to optimize the robot's control policy, mitigating dataset shift and compounding imitation errors.
  • Experiments on neurorobotics tasks show the approach is robust, adaptable, and performs well out-of-sample compared to other state-of-the-art RLfD schemes; code is provided in an associated GitHub repository.
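The mechanism in the last two points — scoring each step against a model-estimated desired goal and weighting the policy update by the resulting regret — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact form of the regret, the `value_fn`, and the exponential weighting scheme (`beta`) are all assumptions for the sake of the example.

```python
import numpy as np

def preference_regret(achieved, desired, value_fn):
    """Per-step regret: how much worse the achieved state scores than the
    desired goal proposed by the preference model. (Hypothetical form --
    the paper's exact definition is not given in this summary.)"""
    return value_fn(desired) - value_fn(achieved)

def regret_weighted_loss(log_probs, regrets, beta=1.0):
    """Weight a self-imitation log-likelihood loss so that actions leading
    to low-regret states are reinforced more strongly (assumed scheme)."""
    weights = np.exp(-beta * np.asarray(regrets, dtype=float))
    weights /= weights.sum()
    return -float(np.sum(weights * np.asarray(log_probs, dtype=float)))

# Toy usage: states are scalars, value is negative distance to the origin.
value_fn = lambda s: -abs(s)
r = preference_regret(achieved=0.5, desired=0.0, value_fn=value_fn)  # 0.5
loss = regret_weighted_loss(log_probs=[-1.0, -1.0], regrets=[r, 0.0])
```

The key idea the sketch captures is that regret is relative to a per-time-step goal rather than to a fixed expert trajectory, which is what lets the agent keep improving once demonstrations run out.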

Abstract

Robot reinforcement learning from demonstrations (RLfD) assumes that expert data is abundant, which is usually unrealistic in the real world given data scarcity and high collection cost. Furthermore, imitation learning algorithms assume that the data is independently and identically distributed, which results in poorer performance as errors gradually emerge and compound within test-time trajectories. We address these issues by introducing the "master your own expertise" (MYOE) framework, a self-imitation framework that enables robotic agents to learn complex behaviors from limited demonstration data. Inspired by human perception and action, we propose the queryable mixture-of-preferences state space model (QMoP-SSM), which estimates the desired goal at every time step. These desired goals are used to compute the "preference regret", which in turn is used to optimize the robot control policy. Our experiments demonstrate the robustness, adaptability, and out-of-sample performance of our agent compared to other state-of-the-art RLfD schemes. The GitHub repository that supports this work can be found at: https://github.com/rxng8/neurorobot-preference-regret-learning.