Reducing Oracle Feedback with Vision-Language Embeddings for Preference-Based RL

arXiv cs.LG / March 31, 2026

Key Points

  • Preference-based reinforcement learning is often limited by the high cost of oracle feedback needed to learn reward functions from comparisons.
  • The paper proposes ROVED, a hybrid approach that uses lightweight vision-language embeddings to create segment-level preferences while routing only high-uncertainty samples to an oracle for targeted supervision.
  • ROVED adds a parameter-efficient fine-tuning strategy so the VLE is progressively adapted using the oracle feedback, improving performance over time without losing scalability.
  • Experiments on multiple robotic manipulation tasks show ROVED matches or exceeds prior methods while cutting oracle queries by up to 80% and delivering cumulative annotation savings of up to 90% via cross-task generalization of the adapted VLE.
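The uncertainty-based routing described in the key points can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the goal-similarity scoring, the softmax preference probability, and the `threshold` around 0.5 are all assumptions made for the sketch.

```python
import numpy as np

def vle_preference_prob(emb_a, emb_b, goal_emb):
    """Hypothetical VLE scorer: probability that segment A is preferred,
    from a softmax over each segment's cosine similarity to a goal embedding."""
    def cos(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
    sa, sb = cos(emb_a, goal_emb), cos(emb_b, goal_emb)
    return float(np.exp(sa) / (np.exp(sa) + np.exp(sb)))

def label_pairs(pairs, goal_emb, oracle, threshold=0.2):
    """Label each (emb_a, emb_b) segment pair with the VLE, deferring to the
    oracle only when the VLE's preference probability is close to 0.5."""
    labels, oracle_queries = [], 0
    for emb_a, emb_b in pairs:
        p = vle_preference_prob(emb_a, emb_b, goal_emb)
        if abs(p - 0.5) < threshold:        # high uncertainty: ask the oracle
            labels.append(oracle(emb_a, emb_b))
            oracle_queries += 1
        else:                               # confident: trust the VLE label
            labels.append(1 if p > 0.5 else 0)
    return labels, oracle_queries
```

Only the ambiguous pairs cost an oracle query, which is the mechanism behind the reported reduction in feedback volume.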

Abstract

Preference-based reinforcement learning can learn effective reward functions from comparisons, but its scalability is constrained by the high cost of oracle feedback. Lightweight vision-language embedding (VLE) models provide a cheaper alternative, but their noisy outputs limit their effectiveness as standalone reward generators. To address this challenge, we propose ROVED, a hybrid framework that combines VLE-based supervision with targeted oracle feedback. Our method uses the VLE to generate segment-level preferences and defers to an oracle only for samples with high uncertainty, identified through a filtering mechanism. In addition, we introduce a parameter-efficient fine-tuning method that adapts the VLE with the collected oracle feedback, so that the two sources of supervision improve each other over time. The framework thus retains the scalability of embeddings and the accuracy of oracle supervision while avoiding the inefficiencies of relying on either alone. Across multiple robotic manipulation tasks, ROVED matches or surpasses prior preference-based methods while reducing oracle queries by up to 80%. Remarkably, the adapted VLE generalizes across tasks, yielding cumulative annotation savings of up to 90%, highlighting the practicality of combining scalable embeddings with precise oracle supervision for preference-based RL.
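The parameter-efficient adaptation idea can be illustrated with a minimal stand-in: a small linear preference head trained on top of frozen embeddings using oracle labels and a Bradley-Terry (logistic) loss. The class name, learning rate, and training loop are hypothetical; the paper's actual fine-tuning method adapts the VLE itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PreferenceHead:
    """Linear reward head on frozen VLE embeddings (illustrative sketch).

    Only the small weight vector `w` is trained on oracle preference labels;
    the embedding model producing emb_a / emb_b stays frozen, mirroring the
    parameter-efficient spirit of the adaptation step."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def prob_a_preferred(self, emb_a, emb_b):
        # Bradley-Terry model: P(A > B) = sigmoid(r(A) - r(B)) with r = w . emb
        return sigmoid(self.w @ (emb_a - emb_b))

    def update(self, emb_a, emb_b, label):
        # One SGD step on the logistic loss; label is 1 if A was preferred
        p = self.prob_a_preferred(emb_a, emb_b)
        self.w += self.lr * (label - p) * (emb_a - emb_b)
```

Each oracle answer becomes a gradient step, so the head's labels grow more reliable as oracle feedback accumulates, which is the synergy the abstract refers to.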