AI Navigate

Context Bootstrapped Reinforcement Learning

arXiv cs.LG · March 20, 2026


Key Points

  • RLVR suffers from exploration inefficiency, especially in tasks requiring novel reasoning patterns or domain-specific knowledge.
  • Context Bootstrapped Reinforcement Learning (CBRL) augments RLVR by stochastically prepending few-shot demonstrations to training prompts, with an injection probability that starts high and anneals to zero over training.
  • This approach forces the policy to internalize reasoning patterns rather than rely on demonstrations at test time, improving exploration efficiency and success rates across tasks.
  • CBRL is algorithm-agnostic and validated across two model families and five Reasoning Gym tasks, with practical applicability demonstrated on the domain-specific language Q.

Abstract

Reinforcement Learning from Verifiable Rewards (RLVR) suffers from exploration inefficiency, where models struggle to generate successful rollouts, resulting in minimal learning signal. This challenge is particularly severe for tasks that require the acquisition of novel reasoning patterns or domain-specific knowledge. To address this, we propose Context Bootstrapped Reinforcement Learning (CBRL), which augments RLVR training by stochastically prepending few-shot demonstrations to training prompts. The injection probability follows a curriculum that starts high to bootstrap early exploration, then anneals to zero so the model must ultimately succeed without assistance. This forces the policy to internalize reasoning patterns from the demonstrations rather than relying on them at test time. We validate CBRL across two model families and five Reasoning Gym tasks. Our results demonstrate that CBRL consistently improves success rate, provides better exploration efficiency, and is algorithm-agnostic. We further demonstrate CBRL's practical applicability on Q, a domain-specific programming language that diverges significantly from mainstream language conventions.
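The injection mechanism described in the abstract can be sketched in a few lines. The linear annealing schedule, function names, and parameter defaults below are illustrative assumptions; the paper only specifies that the injection probability starts high to bootstrap early exploration and anneals to zero.

```python
import random


def injection_prob(step, total_steps, p_start=1.0, p_end=0.0):
    """Annealing curriculum for the demonstration-injection probability.

    A linear schedule is assumed here for illustration; the paper states
    only that the probability starts high and anneals to zero.
    """
    frac = min(step / total_steps, 1.0)
    return p_start + frac * (p_end - p_start)


def build_prompt(task_prompt, demos, step, total_steps, rng=random):
    """Stochastically prepend few-shot demonstrations to a training prompt.

    Early in training (high probability) the policy usually sees the
    demonstrations; by the end (probability zero) it must succeed unassisted.
    """
    if rng.random() < injection_prob(step, total_steps):
        return "\n\n".join(demos) + "\n\n" + task_prompt
    return task_prompt
```

Because the schedule reaches zero before training ends, the policy's final gradient updates come entirely from unassisted rollouts, which is what pushes it to internalize the demonstrated reasoning patterns rather than depend on them at test time.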