Distributional Off-Policy Evaluation with Deep Quantile Process Regression

arXiv stat.ML · April 21, 2026


Key Points

  • The paper reframes off-policy evaluation (OPE) by targeting the entire return distribution rather than only the expected return.
  • It proposes DQPOPE (Deep Quantile Process regression-based Off-Policy Evaluation), a quantile-based OPE algorithm built on deep quantile process regression.
  • The authors extend deep quantile process regression from estimating a discrete set of quantiles to estimating a continuous quantile function, and supply new theoretical results for this extension (see the pinball-loss sketch after this list).
  • They provide a rigorous sample-complexity analysis for distributional OPE with deep neural networks, showing that DQPOPE can estimate the full return distribution from a sample size comparable to what conventional methods need to estimate a single policy value.
  • Experiments indicate that DQPOPE yields more precise and robust policy value estimates than standard OPE methods, improving the practical usefulness of distributional reinforcement learning.

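The summary does not give DQPOPE's training objective, but the standard building block for quantile regression is the pinball loss, whose minimizer at level τ is the τ-quantile of the target distribution. Below is a minimal PyTorch sketch of continuous quantile-function estimation under that assumption: the network takes the level τ as an extra input, so a single model represents the entire quantile function. The names (QuantileNet, pinball_loss) and the synthetic data are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class QuantileNet(nn.Module):
    """Hypothetical conditional quantile network: maps a state feature x and
    a quantile level tau in (0, 1) to an estimate of the tau-quantile of the
    return. Feeding tau as a continuous input (rather than fixing a discrete
    grid of levels) is what yields a full quantile function."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + 1, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, tau], dim=-1)).squeeze(-1)

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, tau: torch.Tensor) -> torch.Tensor:
    """Quantile (pinball) loss: an asymmetric absolute error whose
    population minimizer is the tau-quantile of the target."""
    diff = target - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1.0) * diff))

# Toy training loop on synthetic (state, return) pairs. Resampling tau
# uniformly at every step trains all quantile levels simultaneously.
state_dim = 4
model = QuantileNet(state_dim)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.randn(256, state_dim)          # placeholder states
    g = x.sum(dim=-1) + torch.randn(256)     # placeholder returns
    tau = torch.rand(256, 1)                 # random quantile levels
    loss = pinball_loss(model(x, tau), g, tau.squeeze(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Resampling τ for every batch is what makes the estimate continuous in τ, rather than tied to a fixed grid of quantiles as in discrete-quantile methods.
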
Abstract

This paper investigates the off-policy evaluation (OPE) problem from a distributional perspective. Rather than focusing solely on the expectation of the total return, as in most existing OPE methods, we aim to estimate the entire return distribution. To this end, we introduce a quantile-based approach for OPE using deep quantile process regression, presenting a novel algorithm called Deep Quantile Process regression-based Off-Policy Evaluation (DQPOPE). We provide new theoretical insights into the deep quantile process regression technique, extending existing approaches from estimating discrete quantiles to estimating a continuous quantile function. A key contribution of our work is a rigorous sample-complexity analysis for distributional OPE with deep neural networks, bridging theoretical analysis with practical algorithmic implementations. We show that DQPOPE achieves statistical advantages, estimating the full return distribution with the same sample size that conventional methods require to estimate a single policy value. Empirical studies further show that DQPOPE provides significantly more precise and robust policy value estimates than standard methods, thereby enhancing the practical applicability and effectiveness of distributional reinforcement learning approaches.
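The claimed statistical advantage rests on the fact that the quantile function subsumes the usual point estimate: the expected return is the integral of Q(τ) over τ in [0, 1], and truncating that integral yields tail-risk measures such as CVaR. A hedged sketch of this read-out step, reusing the hypothetical QuantileNet interface from the snippet above:

```python
import torch

def policy_value_from_quantiles(quantile_fn, x: torch.Tensor, n_grid: int = 512) -> torch.Tensor:
    """Approximate E[G | x] (the integral of Q(tau | x) over tau in [0, 1])
    with a midpoint-rule average of the estimated quantile function."""
    taus = ((torch.arange(n_grid, dtype=torch.float32) + 0.5) / n_grid).unsqueeze(-1)
    x_rep = x.unsqueeze(0).expand(n_grid, -1)   # same state at every level
    with torch.no_grad():
        q = quantile_fn(x_rep, taus)            # estimated Q(tau | x), shape (n_grid,)
    return q.mean()

# CVaR at level alpha falls out of the same object: average q only over
# the entries with tau <= alpha instead of over the whole grid.
```

This is one plausible read-out, not the paper's estimator; the point it illustrates is that a single fitted quantile function supports the mean, individual quantiles, and risk measures without refitting.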