Optimal Single-Policy Sample Complexity and Transient Coverage for Average-Reward Offline RL
arXiv stat.ML / 4/23/2026
Key Points
- The paper analyzes offline reinforcement learning in average-reward MDPs, focusing on the distribution-shift and non-uniform-coverage challenges that prior theoretical work often under-addressed.
- It derives the first fully single-policy sample-complexity bound for average-reward offline RL, one that depends on the target policy only through its bias span and a new "policy hitting radius" measure.
- The authors extend guarantees to general weakly communicating MDPs, avoiding the restrictive structural assumptions used in earlier studies.
- They propose a pessimistic discounted value iteration algorithm with a novel quantile clipping technique to obtain sharper, empirical-span-based penalties, and the method works without prior knowledge of key parameters.
- The paper also proves (with hard examples) that successful learning needs coverage stronger than the target policy’s stationary distribution, and it provides matching (nearly) lower bounds to support the tightness of the result.
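
To make the algorithmic idea more concrete, below is a minimal, illustrative sketch of pessimistic discounted value iteration on a tabular offline dataset, with a quantile-style clipping step standing in for the paper's quantile clipping technique. The function name `pessimistic_vi`, the penalty scale `c`, the clipping quantile `q`, the count-based penalty form, and the assumption of rewards in [0, 1] are illustrative choices, not the paper's exact algorithm or constants. For context, the bias span of a policy is the standard quantity sp(h^π) = max_s h^π(s) − min_s h^π(s).

```python
import numpy as np

def pessimistic_vi(dataset, n_states, n_actions, gamma=0.99, c=1.0,
                   q=0.9, n_iters=500, tol=1e-8):
    """Tabular pessimistic discounted VI with a quantile-style clipping step.

    `dataset` is an iterable of (s, a, r, s_next) transitions with rewards
    assumed to lie in [0, 1].
    """
    # Empirical counts, mean rewards, and transition frequencies.
    counts = np.zeros((n_states, n_actions))
    rew_sum = np.zeros((n_states, n_actions))
    trans = np.zeros((n_states, n_actions, n_states))
    for s, a, r, s_next in dataset:
        counts[s, a] += 1
        rew_sum[s, a] += r
        trans[s, a, s_next] += 1

    visited = counts > 0
    r_hat = np.where(visited, rew_sum / np.maximum(counts, 1), 0.0)
    p_hat = trans / np.maximum(counts[..., None], 1)
    # Count-based pessimism penalty; unvisited pairs get the largest penalty.
    penalty = np.where(visited, c / np.sqrt(np.maximum(counts, 1)), np.inf)

    V = np.zeros(n_states)
    for _ in range(n_iters):
        # Pessimistic Bellman backup on the empirical model.
        Q = r_hat + gamma * (p_hat @ V) - penalty
        V_new = np.maximum(Q.max(axis=1), 0.0)  # never below the trivial value 0
        # Quantile-style clipping: cap the iterate at an empirical upper
        # quantile so its span is controlled by the data, not by outliers.
        V_new = np.minimum(V_new, np.quantile(V_new, q))
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new

    # Greedy policy with respect to the final pessimistic Q-values.
    Q = r_hat + gamma * (p_hat @ V) - penalty
    policy = Q.argmax(axis=1)
    return V, policy
```

The clipping step is what keeps the effective span of the iterates tied to the data rather than to a worst-case parameter; a full treatment would also tune the discount factor as a function of the sample size, which this sketch leaves fixed.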