Offline-Online Reinforcement Learning for Linear Mixture MDPs

arXiv cs.LG / 4/15/2026

Key Points

  • The paper investigates offline-online reinforcement learning for linear mixture Markov decision processes when the environment shifts between data collection and online interaction.
  • It introduces an algorithm that adaptively leverages offline data when they are informative, e.g., when coverage is sufficient or the environment shift is small, and provably improves over purely online learning.
  • When the offline data are uninformative because the environment mismatch is large, the method safely downweights them and matches the performance of an online-only baseline.
  • The authors derive regret upper bounds and provide nearly matching lower bounds to precisely characterize the conditions under which offline data help.
  • Experiments corroborate the theory: the algorithm benefits from informative offline data and avoids harm when the offline data are badly mismatched.

Abstract

We study offline-online reinforcement learning in linear mixture Markov decision processes (MDPs) under environment shift. In the offline phase, data are collected by an unknown behavior policy and may come from a mismatched environment, while in the online phase the learner interacts with the target environment. We propose an algorithm that adaptively leverages offline data. When the offline data are informative, either due to sufficient coverage or small environment shift, the algorithm provably improves over purely online learning. When the offline data are uninformative, it safely ignores them and matches the online-only performance. We establish regret upper bounds that explicitly characterize when offline data are beneficial, together with nearly matching lower bounds. Numerical experiments further corroborate our theoretical findings.
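
As a rough illustration of the adaptive mechanism described above, the sketch below combines offline and online regression statistics with a data-driven weight, so that mismatched offline data can be downweighted toward zero. It is a simplified ridge-regression analogue of parameter estimation in a linear mixture MDP, not the authors' algorithm; the function names, the weight grid, and the in-sample validation criterion are all illustrative assumptions.

```python
# Hypothetical sketch: adaptively weight offline data against online data
# when estimating a linear-mixture-style parameter theta. Not the paper's method.
import numpy as np


def ridge_estimate(cov, vec, lam=1.0):
    """Regularized least-squares solve: (lam * I + cov)^{-1} vec."""
    d = cov.shape[0]
    return np.linalg.solve(lam * np.eye(d) + cov, vec)


def adaptive_theta_estimate(X_off, y_off, X_on, y_on, lam=1.0,
                            weights=(0.0, 0.25, 0.5, 1.0)):
    """Pick the offline weight w whose combined estimate best fits the online
    targets (a crude in-sample proxy for how informative the offline data are)."""
    best_theta, best_err = None, np.inf
    for w in weights:
        cov = w * X_off.T @ X_off + X_on.T @ X_on
        vec = w * X_off.T @ y_off + X_on.T @ y_on
        theta = ridge_estimate(cov, vec, lam)
        err = np.mean((X_on @ theta - y_on) ** 2)  # fit on online data only
        if err < best_err:
            best_theta, best_err = theta, err
    return best_theta


# Toy usage: offline data come from a shifted parameter, online data from the target.
rng = np.random.default_rng(0)
d, n_off, n_on = 5, 500, 50
theta_true = rng.normal(size=d)
theta_shifted = theta_true + 0.5 * rng.normal(size=d)  # environment shift
X_off = rng.normal(size=(n_off, d))
y_off = X_off @ theta_shifted + 0.1 * rng.normal(size=n_off)
X_on = rng.normal(size=(n_on, d))
y_on = X_on @ theta_true + 0.1 * rng.normal(size=n_on)
theta_hat = adaptive_theta_estimate(X_off, y_off, X_on, y_on)
print("parameter error:", np.linalg.norm(theta_hat - theta_true))
```

When the shift is small, larger weights tend to be selected and the offline data shrink the estimation error; when the shift is large, the selected weight tends toward zero and the estimate falls back on the online data alone, mirroring the "help when informative, do no harm otherwise" behavior the paper formalizes.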