Provably Efficient Offline-to-Online Value Adaptation with General Function Approximation

arXiv cs.LG / 4/16/2026


Key Points

  • The paper studies value adaptation in offline-to-online reinforcement learning: starting from an imperfect offline-pretrained Q-function, the learner adapts it to the target environment using only a limited amount of online interaction.

  • A minimax lower bound shows that even when the pretrained Q-function is close to the optimal Q^\star, online adaptation can be no more efficient than pure online RL on certain hard instances.

  • Under a novel structural condition on the offline-pretrained value functions, the proposed algorithm O2O-LSVI achieves problem-dependent sample complexity that provably improves over pure online RL; neural-network experiments support the theory.

Abstract

We study value adaptation in offline-to-online reinforcement learning under general function approximation. Starting from an imperfect offline-pretrained Q-function, the learner aims to adapt it to the target environment using only a limited amount of online interaction. We first characterize the difficulty of this setting by establishing a minimax lower bound, showing that even when the pretrained Q-function is close to the optimal value function Q^\star, online adaptation can be no more efficient than pure online RL on certain hard instances. On the positive side, under a novel structural condition on the offline-pretrained value functions, we propose O2O-LSVI, an adaptation algorithm with problem-dependent sample complexity that provably improves over pure online RL. Finally, we complement our theory with neural-network experiments that demonstrate the practical effectiveness of the proposed method.
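The digest does not spell out O2O-LSVI's actual update rule, but the warm-start idea it builds on can be illustrated in a few lines. The Python sketch below runs backward least-squares value iteration on a small random tabular MDP, initializing the Q-table from a pretrained estimate so that online data only needs to correct the entries it actually visits. The toy MDP, the synthetic pretrained table, the greedy (bonus-free) action selection, and the episode budget are all illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Hypothetical toy setup: a random tabular finite-horizon MDP, standing in
# for the paper's general function-approximation setting.
rng = np.random.default_rng(0)
S, A, H = 5, 3, 4                               # states, actions, horizon
P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a] = distribution over next states
R = rng.uniform(size=(S, A))                    # rewards in [0, 1]

def rollout(Q):
    """Collect one episode acting greedily w.r.t. Q (no exploration bonus)."""
    traj, s = [], int(rng.integers(S))
    for h in range(H):
        a = int(np.argmax(Q[h, s]))
        s_next = int(rng.choice(S, p=P[s, a]))
        traj.append((h, s, a, R[s, a], s_next))
        s = s_next
    return traj

# Imperfect "offline-pretrained" Q-table; a real instance would come from
# offline pretraining, here it is just noise to keep the sketch self-contained.
Q = rng.uniform(size=(H, S, A))

data = []
for episode in range(200):
    data.extend(rollout(Q))
    # Backward least-squares sweep: refit each Q[h] to empirical Bellman
    # targets on visited (s, a) pairs. Unvisited entries keep their
    # pretrained values, which is where the warm start pays off.
    for h in range(H - 1, -1, -1):
        counts = np.zeros((S, A))
        targets = np.zeros((S, A))
        for hh, s, a, r, s_next in data:
            if hh != h:
                continue
            v_next = 0.0 if h == H - 1 else np.max(Q[h + 1, s_next])
            counts[s, a] += 1
            targets[s, a] += r + v_next
        visited = counts > 0
        Q[h][visited] = targets[visited] / counts[visited]

print("Greedy value estimates at step 0:", np.round(Q[0].max(axis=1), 3))
```

In the tabular case the least-squares fit reduces to an empirical mean per (h, s, a); the paper's setting replaces this table with a general function class and, per the abstract, relies on a structural condition on the pretrained value functions to make the warm start provably helpful.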
