AI Navigate

Efficient Exploration at Scale

arXiv cs.LG / March 19, 2026


Key Points

  • The paper presents an online learning algorithm that significantly improves data efficiency for reinforcement learning from human feedback (RLHF) by incrementally updating both the reward model and the language model as new choice data arrives.
  • Key techniques include a small affirmative nudge added to each reinforcement signal, an epistemic neural network that models reward uncertainty, and information-directed exploration to guide data collection.
  • In experiments with Gemma LLMs, the algorithm matches the performance of offline RLHF trained on 200K labels while using fewer than 20K labels, a more than 10x gain in data efficiency.
  • The authors extrapolate that their algorithm trained on 1M labels could match offline RLHF trained on 1B labels, a 1,000x gain that would substantially reshape the economics of RLHF pipelines.
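The key points above describe an online loop that interleaves reward-model fitting on choice data with REINFORCE-style policy updates carrying a small affirmative nudge. As a minimal toy sketch (not the paper's implementation): a discrete "response" set, a Bradley-Terry reward model updated incrementally from simulated pairwise preferences, and a softmax policy updated by REINFORCE with the learned reward plus a fixed nudge. The reward values, learning rates, and `NUDGE` constant are illustrative assumptions.

```python
import math
import random

random.seed(0)
K = 4
true_reward = [0.0, 0.5, 1.0, 2.0]  # hidden annotator preferences (toy assumption)
r_hat = [0.0] * K                   # incrementally fitted reward model
logits = [0.0] * K                  # policy parameters

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

NUDGE = 0.1  # small affirmative bonus added to every reinforcement signal

for t in range(5000):
    p = softmax(logits)
    # sample two candidate responses from the current policy
    a = random.choices(range(K), weights=p)[0]
    b = random.choices(range(K), weights=p)[0]
    # simulated annotator: Bradley-Terry choice between a and b
    pref_a = 1.0 / (1.0 + math.exp(true_reward[b] - true_reward[a]))
    y = 1.0 if random.random() < pref_a else 0.0
    # incremental reward-model update: one logistic-loss gradient step
    q = 1.0 / (1.0 + math.exp(r_hat[b] - r_hat[a]))
    g = y - q
    r_hat[a] += 0.1 * g
    r_hat[b] -= 0.1 * g
    # REINFORCE-style policy update, reinforcement signal = reward + nudge
    for x in (a, b):
        signal = r_hat[x] + NUDGE
        for k in range(K):
            grad = (1.0 if k == x else 0.0) - p[k]
            logits[k] += 0.01 * signal * grad

best = max(range(K), key=lambda k: logits[k])
```

In this toy run the policy concentrates on the response the simulated annotator prefers most; the nudge keeps signals slightly optimistic, which in the paper's setting encourages continued generation and exploration rather than premature collapse.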

Abstract

We develop an online learning algorithm that dramatically improves the data efficiency of reinforcement learning from human feedback (RLHF). Our algorithm incrementally updates reward and language models as choice data is received. The reward model is fit to the choice data, while the language model is updated by a variation of REINFORCE, with reinforcement signals provided by the reward model. Several features enable the efficiency gains: a small affirmative nudge added to each reinforcement signal, an epistemic neural network that models reward uncertainty, and information-directed exploration. With Gemma large language models (LLMs), our algorithm matches the performance of offline RLHF trained on 200K labels using fewer than 20K labels, representing more than a 10x gain in data efficiency. Extrapolating from our results, we expect our algorithm trained on 1M labels to match offline RLHF trained on 1B labels. This represents a 1,000x gain. To our knowledge, these are the first results to demonstrate that such large improvements are possible.
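The abstract's other two ingredients are an epistemic neural network for reward uncertainty and information-directed exploration to decide which data to label. A rough sketch of the idea (using a small ensemble of linear reward heads as a cheap stand-in for an epistemic network, and ensemble disagreement as a simplified proxy for information gain; all names, dimensions, and candidate embeddings below are hypothetical):

```python
import random
import statistics

random.seed(1)
M = 10  # ensemble size: cheap stand-in for an epistemic neural network
N, D = 8, 5
# hypothetical candidate responses, embedded as D-dim feature vectors
candidates = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
# each ensemble member is a random linear reward head (prior disagreement)
heads = [[random.gauss(0, 1) for _ in range(D)] for _ in range(M)]

def reward(head, x):
    # linear reward prediction for one ensemble member
    return sum(w * v for w, v in zip(head, x))

def disagreement(i, j):
    # spread of predicted preference margins for the pair (i, j):
    # high spread = the model is epistemically unsure which response wins
    margins = [reward(h, candidates[i]) - reward(h, candidates[j]) for h in heads]
    return statistics.pstdev(margins)

# information-directed heuristic: send the annotator the pair the
# ensemble disagrees about most, so each label is maximally informative
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
best_pair = max(pairs, key=lambda ij: disagreement(*ij))
```

Full information-directed sampling also trades exploration against expected regret; this sketch keeps only the uncertainty-seeking half, which is the part that steers label collection toward the most informative comparisons.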