Optimal Single-Policy Sample Complexity and Transient Coverage for Average-Reward Offline RL

arXiv stat.ML · April 23, 2026


Key Points

  • The paper analyzes offline reinforcement learning in average-reward MDPs, focusing on distribution shift and non-uniform coverage, challenges that prior theoretical work has often under-addressed.
  • It derives the first fully single-policy sample-complexity bound for average-reward offline RL, depending on the target policy only through its bias span (standard definitions are sketched after this list) and a new policy hitting radius measure.
  • The authors extend guarantees to general weakly communicating MDPs, avoiding the restrictive structural assumptions used in earlier studies.
  • They propose a pessimistic discounted value iteration algorithm with a novel quantile clipping technique that yields sharper, empirical-span-based penalties; the method requires no prior knowledge of key parameters (a toy sketch of the general recipe follows the abstract).
  • The paper also shows, via hard examples, that successful learning requires coverage stronger than the target policy’s stationary distribution, and it provides nearly matching lower bounds that support the tightness of the main result.
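
For reference, the bias span mentioned above is a standard quantity in average-reward MDP theory. The block below records the textbook definitions of the gain and bias of a stationary policy; the paper's exact conventions (and the definition of the new policy hitting radius) may differ and are not reproduced here.

```latex
% Gain (long-run average reward) of a stationary policy \pi from state s:
\rho^{\pi}(s) = \lim_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{T-1} r(s_t, a_t) \,\middle|\, s_0 = s\right]

% Bias (relative value) of \pi, defined when the gain is a constant \rho^{\pi}
% (as for optimal policies in weakly communicating MDPs); a Cesaro limit is
% used in general:
h^{\pi}(s) = \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty}
  \bigl(r(s_t, a_t) - \rho^{\pi}\bigr) \,\middle|\, s_0 = s\right]

% Bias span, the target-policy complexity measure referenced above:
\mathrm{sp}(h^{\pi}) = \max_{s} h^{\pi}(s) - \min_{s} h^{\pi}(s)
```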

Abstract

We study offline reinforcement learning in average-reward MDPs, a setting that presents heightened challenges of distribution shift and non-uniform coverage and has been relatively underexamined theoretically. While previous work obtains performance guarantees under single-policy data coverage assumptions, those guarantees rely on additional complexity measures that are uniform over all policies, such as the uniform mixing time. We develop sharp guarantees that depend only on the target policy, specifically its bias span and a novel policy hitting radius, yielding the first fully single-policy sample complexity bound for average-reward offline RL. We are also the first to handle general weakly communicating MDPs, in contrast to the restrictive structural assumptions made in prior work. To achieve this, we introduce an algorithm based on pessimistic discounted value iteration enhanced by a novel quantile clipping technique, which enables the use of a sharper empirical-span-based penalty function. Our algorithm requires no prior parameter knowledge for its implementation. Remarkably, we show via hard examples that learning under our conditions requires coverage assumptions beyond the stationary distribution of the target policy, distinguishing single-policy complexity measures from previously examined cases. We also develop lower bounds that nearly match our main result.
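
The abstract describes the algorithm only at a high level. Below is a minimal toy sketch of the general recipe: value iteration on a discounted surrogate with a count-based pessimism penalty, plus a quantile-style clipping step that caps the value estimates. The penalty form, the clipping rule, and all names here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def pessimistic_dvi(counts, rewards, gamma=0.99, n_iters=500,
                    c_penalty=1.0, clip_quantile=0.95):
    """Toy pessimistic discounted value iteration on an offline dataset.

    Illustrative sketch only (NOT the paper's algorithm): the penalty and
    the quantile-clipping rule below are hypothetical stand-ins.

    counts:  (S, A, S) array of transition counts from the offline data.
    rewards: (S, A) array of empirical mean rewards, assumed in [0, 1].
    """
    S, A, _ = counts.shape
    n_sa = counts.sum(axis=2)                      # visits to each (s, a)
    # Empirical transition model; unvisited pairs fall back to uniform.
    p_hat = np.where(n_sa[..., None] > 0,
                     counts / np.maximum(n_sa[..., None], 1),
                     1.0 / S)
    V = np.zeros(S)
    for _ in range(n_iters):
        # Pessimism penalty shrinking with visit counts (hypothetical form).
        penalty = c_penalty / np.sqrt(np.maximum(n_sa, 1))
        Q = rewards + gamma * (p_hat @ V) - penalty
        Q[n_sa == 0] = 0.0                         # no data: pessimistic value
        V_new = Q.max(axis=1)
        # Quantile clipping (stand-in): cap estimates at an empirical upper
        # quantile to keep the effective span of V small, in the spirit of
        # an empirical-span-based penalty.
        upper = np.quantile(V_new, clip_quantile)
        V = np.minimum(V_new, upper)
    return V, Q
```

Per the abstract, the actual penalty is empirical-span-based and is enabled by the quantile clipping technique; the sketch above only conveys the overall pessimistic iteration structure.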