Learning from Imperfect Demonstrations via Temporal Behavior Tree-Guided Trajectory Repair

arXiv cs.LG / 4/7/2026


Key Points

  • The paper addresses a key limitation of learning from demonstrations: real-world trajectories are often noisy or violate task constraints, making them problematic for imitation and reinforcement learning.
  • It introduces a formal trajectory-repair framework using Temporal Behavior Trees (TBT), which extend Signal Temporal Logic with Behavior Tree semantics to express temporal task requirements.
  • When demonstrations violate a TBT specification, a model-based repair algorithm adjusts trajectory segments so the resulting dataset is logically consistent and interpretable.
  • The repaired trajectories are then used to extract potential functions that shape the reward signal for reinforcement learning, steering agents toward task-consistent regions without needing explicit kinematic models.
  • Experiments on grid-world navigation and continuous single-/multi-agent reach-avoid tasks show improved effectiveness and suggest better data efficiency in settings with imperfect demonstrations.
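To make the repair trigger concrete: TBT extends Signal Temporal Logic, where a specification has a quantitative robustness value (positive if satisfied, negative if violated), and a negative score is what flags a demonstration for repair. Below is a minimal, illustrative sketch of such a robustness check for a simple reach-avoid spec; the function names, radii, and the specific spec are assumptions for illustration, not the paper's TBT machinery.

```python
import numpy as np

def robustness_reach_avoid(traj, goal, obstacle, goal_r, obs_r):
    """Quantitative robustness of an STL-style reach-avoid spec:
    eventually(dist to goal < goal_r) AND always(dist to obstacle > obs_r).
    Positive => the trajectory satisfies the spec; negative => it violates it
    and would be handed to the repair algorithm. (Illustrative sketch.)"""
    traj = np.asarray(traj, dtype=float)
    d_goal = np.linalg.norm(traj - goal, axis=1)
    d_obs = np.linalg.norm(traj - obstacle, axis=1)
    reach = np.max(goal_r - d_goal)   # "eventually": max over time
    avoid = np.min(d_obs - obs_r)     # "always": min over time
    return min(reach, avoid)          # conjunction: min of the two

# A demonstration that cuts straight through the obstacle violates the spec:
bad = [(0, 0), (1, 1), (2, 2), (3, 3)]
rho = robustness_reach_avoid(bad, goal=np.array([3.0, 3.0]),
                             obstacle=np.array([1.5, 1.5]),
                             goal_r=0.5, obs_r=1.0)
print(rho)  # negative: the middle waypoints pass within obs_r of the obstacle
```

A detour around the obstacle (e.g. `[(0, 0), (0, 3), (3, 3)]`) yields a positive score, so only the violating segment of a demonstration needs adjustment.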

Abstract

Learning robot control policies from demonstrations is a powerful paradigm, yet real-world data is often suboptimal, noisy, or otherwise imperfect, posing significant challenges for imitation and reinforcement learning. In this work, we present a formal framework that leverages Temporal Behavior Trees (TBT), an extension of Signal Temporal Logic (STL) with Behavior Tree semantics, to repair suboptimal trajectories prior to their use in downstream policy learning. Given demonstrations that violate a TBT specification, a model-based repair algorithm corrects trajectory segments to satisfy the formal constraints, yielding a dataset that is both logically consistent and interpretable. The repaired trajectories are then used to extract potential functions that shape the reward signal for reinforcement learning, guiding the agent toward task-consistent regions of the state space without requiring knowledge of the agent's kinematic model. We demonstrate the effectiveness of this framework on discrete grid-world navigation and continuous single- and multi-agent reach-avoid tasks, highlighting its potential for data-efficient robot learning in settings where high-quality demonstrations cannot be assumed.
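The reward-shaping step the abstract describes can be sketched with classic potential-based shaping (Ng et al., 1999), which adds F(s, s') = γΦ(s') − Φ(s) to the task reward without changing the optimal policy. The particular potential below, the negative distance to the nearest state on a repaired demonstration, is an illustrative assumption, not necessarily the paper's exact construction.

```python
import numpy as np

def make_potential(repaired_states):
    """Potential Phi(s) = -distance to the nearest state on a repaired
    demonstration, so potential is highest near task-consistent regions.
    (Illustrative choice; the paper's extracted potential may differ.)"""
    demo = np.asarray(repaired_states, dtype=float)
    def phi(s):
        return -float(np.min(np.linalg.norm(demo - np.asarray(s, dtype=float), axis=1)))
    return phi

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Potential-based shaping term F = gamma*Phi(s') - Phi(s), added to the
    environment reward r; this form provably preserves optimal policies."""
    return r + gamma * phi(s_next) - phi(s)

# Repaired demonstration along the x-axis:
phi = make_potential([(0, 0), (1, 0), (2, 0), (3, 0)])
# Moving toward the demonstration earns a larger shaping bonus than moving away:
step_toward = shaped_reward(0.0, (1, 2), (1, 1), phi)
step_away = shaped_reward(0.0, (1, 1), (1, 2), phi)
print(step_toward > step_away)  # True
```

Because the shaping term depends only on states visited by the repaired dataset, no kinematic model of the agent is needed, matching the claim in the abstract.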