Dynamics-Predictive Sampling for Active RL Finetuning of Large Reasoning Models

arXiv cs.LG / 3/12/2026

Key Points

  • DPS (Dynamics-Predictive Sampling) introduces online dynamics-predictive sampling that selects informative prompts for RL finetuning of large reasoning models by forecasting their learning dynamics before running expensive rollouts.
  • It models each prompt's solving progress during RL finetuning as a dynamical system governed by a hidden Markov model, and runs online Bayesian inference on historical rollout rewards to produce a predictive prior for prompt sampling.
  • The approach aims to substantially reduce redundant LLM rollouts, accelerate training, and improve reasoning performance across tasks such as mathematics, planning, and visual geometry.
  • Empirical results show DPS lowers rollout cost while achieving superior reasoning capabilities, indicating potential workflow improvements for RL finetuning pipelines.

Abstract

Reinforcement learning (RL) finetuning has become a key technique for enhancing the reasoning abilities of large language models (LLMs). However, its effectiveness critically depends on the selection of training data. Recent advances underscore the importance of online prompt selection methods, which typically concentrate training on partially solved or moderately challenging examples under the current policy, thereby yielding more effective model updates. While significantly accelerating RL finetuning in terms of training steps, these methods also incur substantial computational overhead: they require extensive LLM rollouts over large candidate batches to identify informative samples, an expense that can outweigh the cost of the finetuning process itself. To address this challenge, this work proposes Dynamics-Predictive Sampling (DPS), which predicts and selects informative prompts online by inferring their learning dynamics prior to costly rollouts. Specifically, we introduce a new perspective by modeling each prompt's solving progress during RL finetuning as a dynamical system, where the extent of solving is represented as the state and the transition is characterized by a hidden Markov model. Using historical rollout reward signals, we perform online Bayesian inference to estimate evolving state distributions, and the inference outcome provides a predictive prior for efficient prompt selection without rollout-intensive filtering. Empirical results across diverse reasoning tasks, including mathematics, planning, and visual geometry, demonstrate that DPS substantially reduces redundant rollouts, accelerates the training process, and achieves superior reasoning performance.
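To make the abstract's mechanism concrete, the following is a minimal sketch of dynamics-predictive prompt selection: each prompt's extent of solving is a hidden state, binary rollout rewards are noisy emissions of that state, and a forward-style Bayesian update tracks the state distribution online so that prompts can be ranked without any new rollouts. The three-state space, transition matrix, and emission probabilities here are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

# Hypothetical HMM parameters (assumed for illustration, not from the paper).
# States: 0 = unsolved, 1 = partially solved, 2 = solved.
TRANSITION = np.array([          # P(next state | current state)
    [0.90, 0.10, 0.00],          # unsolved prompts mostly stay unsolved
    [0.05, 0.80, 0.15],          # partially solved prompts may become solved
    [0.00, 0.05, 0.95],          # solved prompts mostly stay solved
])
P_SUCCESS = np.array([0.05, 0.50, 0.95])  # P(reward = 1 | state)

def update_belief(belief, reward):
    """One online Bayesian step: propagate the belief through the
    transition model, then condition on an observed binary reward."""
    predicted = belief @ TRANSITION
    likelihood = P_SUCCESS if reward == 1 else 1.0 - P_SUCCESS
    posterior = predicted * likelihood
    return posterior / posterior.sum()

def informativeness(belief):
    """Predictive prior for selection: predicted mass on the 'partially
    solved' state, where rollouts are most likely to be informative."""
    return (belief @ TRANSITION)[1]

def select_prompts(beliefs, k):
    """Pick the k prompts predicted to be most informative,
    using only historical reward signals (no fresh rollouts)."""
    scores = [informativeness(b) for b in beliefs]
    return sorted(range(len(beliefs)), key=lambda i: -scores[i])[:k]
```

As a usage sketch: starting each prompt at a uniform belief, a history of all-failed rollouts concentrates mass on the unsolved state, all-passed rollouts on the solved state, and a mixed history on the partially solved state; `select_prompts` then favors the mixed-history prompt, mirroring the paper's focus on moderately challenging examples.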