OPRIDE: Offline Preference-based Reinforcement Learning via In-Dataset Exploration

arXiv cs.AI / 4/6/2026


Key Points

  • The paper introduces OPRIDE, an offline preference-based reinforcement learning method intended to improve query efficiency when human preference feedback is costly.
  • It identifies two main causes of low query efficiency in offline PbRL—inefficient exploration and overoptimization of learned reward functions—and addresses both directly in the proposed algorithm.
  • OPRIDE uses a principled in-dataset exploration strategy to make preference queries more informative and incorporates a discount scheduling mechanism to reduce reward overfitting/overoptimization.
  • Experiments across locomotion, manipulation, and navigation tasks show that OPRIDE achieves stronger performance than prior methods while requiring substantially fewer queries.
  • The authors also provide theoretical efficiency guarantees, strengthening the case for OPRIDE as a more reliable and scalable approach for offline PbRL.
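The paper does not include code in this summary, but the "informative queries" idea behind in-dataset exploration can be illustrated with a generic disagreement-based heuristic: train an ensemble of reward models and query the human on the segment pair where the ensemble is most uncertain about which segment is preferred. Everything below (the ensemble of per-segment returns, the Bradley-Terry preference model, the variance criterion) is an illustrative assumption, not OPRIDE's exact acquisition rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: an ensemble of learned reward models scores candidate
# trajectory segments drawn from the offline dataset. Pairs where the
# ensemble disagrees most about which segment is preferred are treated as
# the most informative preference queries.
n_models, n_segments = 5, 20
# Per-model predicted return for each segment (stand-in for learned rewards).
returns = rng.normal(size=(n_models, n_segments))

def query_informativeness(i, j):
    """Variance across the ensemble of the preference probability P(i > j)."""
    logits = returns[:, i] - returns[:, j]      # per-model preference logit
    probs = 1.0 / (1.0 + np.exp(-logits))       # Bradley-Terry preference model
    return probs.var()

# Score all candidate pairs and pick the most informative one to query next.
pairs = [(i, j) for i in range(n_segments) for j in range(i + 1, n_segments)]
scores = [query_informativeness(i, j) for i, j in pairs]
best_pair = pairs[int(np.argmax(scores))]
print(best_pair)
```

A real system would restrict candidates to segments well covered by the dataset (the "in-dataset" constraint) and retrain the ensemble after each batch of answered queries.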

Abstract

Preference-based reinforcement learning (PbRL) can help avoid sophisticated reward designs and align better with human intentions, showing great promise in various real-world applications. However, obtaining human feedback for preferences can be expensive and time-consuming, which forms a strong barrier for PbRL. In this work, we address the problem of low query efficiency in offline PbRL, pinpointing two primary reasons: inefficient exploration and overoptimization of learned reward functions. In response to these challenges, we propose a novel algorithm, Offline PbRL via In-Dataset Exploration (OPRIDE), designed to enhance the query efficiency of offline PbRL. OPRIDE consists of two key features: a principled exploration strategy that maximizes the informativeness of the queries and a discount scheduling mechanism aimed at mitigating overoptimization of the learned reward functions. Through empirical evaluations, we demonstrate that OPRIDE significantly outperforms prior methods, achieving strong performance with notably fewer queries. Moreover, we provide theoretical guarantees of the algorithm's efficiency. Experimental results across various locomotion, manipulation, and navigation tasks underscore the efficacy and versatility of our approach.
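For the second mechanism, one plausible reading of "discount scheduling" is to anneal the effective discount factor during policy optimization: a short effective horizon early in training limits how far the policy can compound and exploit errors in the learned reward, and the horizon is then gradually extended. The schedule below, its direction, and its endpoints are all assumptions for illustration; the paper's actual schedule may differ.

```python
def discount_schedule(step, total_steps, gamma_min=0.9, gamma_max=0.99):
    """Linearly anneal the discount factor over training.

    A low early discount keeps the policy relatively myopic, limiting
    overoptimization against an imperfect learned reward; the schedule
    then relaxes toward the full planning horizon. Illustrative sketch
    only -- not OPRIDE's exact mechanism.
    """
    frac = min(max(step / total_steps, 0.0), 1.0)  # clamp progress to [0, 1]
    return gamma_min + frac * (gamma_max - gamma_min)

# Example: query the schedule at the start, midpoint, and end of training.
gammas = [discount_schedule(s, 1000) for s in (0, 500, 1000)]
print(gammas)
```

In an offline PbRL loop, the scheduled discount would be plugged into the Bellman target of whatever offline RL backbone is used to optimize the policy against the learned reward.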