Rainbow-DemoRL: Combining Improvements in Demonstration-Augmented Reinforcement Learning

arXiv cs.RO / 3/31/2026


Key Points

  • The paper studies demonstration-augmented online reinforcement learning by comparing several ways to use offline demonstrations, including direct transition reuse, offline pretraining, and reference-action/value approaches.
  • It proposes a taxonomy of existing demonstration-augmented RL methods and runs a broad set of empirical experiments to measure their individual contributions to online sample efficiency.
  • The findings show that directly reusing offline data and initializing with behavior cloning reliably yield better online sample efficiency than more complex offline RL pretraining pipelines.
  • The study also evaluates whether these strategies can be effectively combined, identifying hybrid combinations that deliver cumulative benefits for sample-efficient online RL.
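Of the three strategies, direct transition reuse is the simplest to illustrate: offline demonstrations are mixed into each gradient-update batch alongside freshly collected online transitions. The sketch below shows one common way to do this; the function name and the `demo_ratio` mixing knob are illustrative assumptions, not details from the paper.

```python
import random

def sample_mixed_batch(demo_buffer, online_buffer, batch_size, demo_ratio=0.5):
    """Draw a training batch that mixes offline demonstrations with
    online replay transitions.

    demo_ratio is a hypothetical hyperparameter controlling the fraction
    of each batch drawn from the demonstration buffer.
    """
    n_demo = min(int(batch_size * demo_ratio), len(demo_buffer))
    n_online = batch_size - n_demo
    # Sample without replacement from demos, with replacement from online data.
    batch = random.sample(demo_buffer, n_demo)
    batch += random.choices(online_buffer, k=n_online)
    random.shuffle(batch)
    return batch

# Example: transitions as (state, action, reward) tuples.
demos = [("s_demo", "a_demo", 1.0)] * 10
online = [("s_online", "a_online", 0.0)] * 10
batch = sample_mixed_batch(demos, online, batch_size=8)
```

With `demo_ratio=0.5`, half of each batch comes from demonstrations, which is the kind of fixed mixing schedule the direct-reuse category covers; the other two categories (offline pretraining and reference-action/value methods) instead use the demonstrations to shape the policy or value function before or during online training.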

Abstract

Several approaches have been proposed to improve the sample efficiency of online reinforcement learning (RL) by leveraging demonstrations collected offline. The offline data can be used directly as transitions to optimize RL objectives, or offline policy and value functions can first be learned from the data and then used for online finetuning or to provide reference actions. While each of these strategies has shown compelling results, it is unclear which method has the most impact on sample efficiency, whether these approaches can be combined, and whether there are cumulative benefits. We classify existing demonstration-augmented RL approaches into three categories and perform an extensive empirical study of their strengths, weaknesses, and combinations to isolate the contribution of each strategy and determine effective hybrid combinations for sample-efficient online RL. Our analysis reveals that directly reusing offline data and initializing with behavior cloning consistently outperform more complex offline RL pretraining methods for improving online sample efficiency.