Learning from Imperfect Demonstrations via Temporal Behavior Tree-Guided Trajectory Repair
arXiv cs.LG / 4/7/2026
Key Points
- The paper addresses a key limitation of learning from demonstrations: real-world trajectories are often noisy or violate task constraints, making them problematic for imitation and reinforcement learning.
- It introduces a formal trajectory-repair framework using Temporal Behavior Trees (TBT), which extend Signal Temporal Logic with Behavior Tree semantics to express temporal task requirements.
- When demonstrations violate a TBT specification, a model-based repair algorithm adjusts trajectory segments so the resulting dataset is logically consistent and interpretable.
- The repaired trajectories are then used to extract potential functions that shape the reward signal for reinforcement learning, steering agents toward task-consistent regions without needing explicit kinematic models.
- Experiments on grid-world navigation and continuous single- and multi-agent reach-avoid tasks show higher task success with repaired demonstrations and suggest better data efficiency when the available demonstrations are imperfect.
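The last two points describe potential-based reward shaping driven by repaired demonstrations. A minimal sketch of that idea, under assumptions not taken from the paper: the potential is defined as the negative distance to the nearest point of a repaired trajectory, and the shaped reward uses the standard potential-based form, which preserves optimal policies. The function names and the distance-based potential are illustrative choices, not the paper's actual construction.

```python
import numpy as np

def make_potential(repaired_traj):
    """Build a potential Phi(s) from a repaired demonstration:
    the negative Euclidean distance from state s to the nearest
    trajectory point (a hypothetical choice for illustration)."""
    traj = np.asarray(repaired_traj, dtype=float)
    def phi(state):
        state = np.asarray(state, dtype=float)
        return -float(np.min(np.linalg.norm(traj - state, axis=1)))
    return phi

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    """Potential-based shaping: r' = r + gamma * Phi(s') - Phi(s)."""
    return r + gamma * phi(s_next) - phi(s)

# Example: a repaired 2-D reach-avoid trajectory
phi = make_potential([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
# Moving from an off-trajectory state onto the trajectory
# yields a positive shaping bonus.
print(shaped_reward(0.0, [0.0, 1.0], [1.0, 1.0], phi))  # → 1.0
```

Because the shaping term telescopes along any trajectory, it steers the agent toward the repaired, specification-consistent region without changing which policies are optimal.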