Random Is Hard to Beat: Active Selection in online DPO with Modern LLMs

arXiv cs.LG / 4/6/2026


Key Points

  • The paper studies Active Preference Learning (APL) for online Direct Preference Optimization (DPO) with modern LLMs and asks whether uncertainty-based sampling beats simple Random selection when pretraining priors are strong.
  • Across multiple evaluation dimensions—harmlessness, helpfulness, and instruction-following—using reward models and LLM-as-a-judge proxies, APL delivers negligible improvements in proxy win-rates over Random sampling.
  • The authors observe a dissociation: proxy win-rate can improve while general capability (measured on standard benchmarks) degrades, suggesting that proxy judgments are misaligned with broader model quality.
  • APL does not substantially reduce variance or prevent “capability collapse” better than random sampling, even though it adds computational overhead for active selection.
  • The study concludes that, under strong pre-trained priors, the extra cost of active selection is hard to justify versus Random’s “cheap diversity,” and they release code publicly.
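To make the comparison concrete, here is a minimal sketch of the two selection strategies the key points contrast. It assumes a Bradley-Terry preference model and uses a hypothetical candidate pool where each response pair carries a proxy reward margin; the uncertainty-based selector picks pairs whose preference probability is closest to 0.5 (highest predictive uncertainty), while the baseline samples uniformly. This is an illustration of the general APL-vs-Random setup, not the paper's exact implementation.

```python
import math
import random


def bt_preference_prob(margin):
    """Bradley-Terry probability that response A is preferred over B,
    given the proxy reward margin r_A - r_B."""
    return 1.0 / (1.0 + math.exp(-margin))


def select_random(pool, k, rng):
    """Random baseline: a uniform sample of k candidate pairs."""
    return rng.sample(pool, k)


def select_uncertain(pool, k):
    """Uncertainty-based APL: pick the k pairs with the smallest |margin|,
    i.e., preference probability closest to 0.5 (maximal uncertainty)."""
    return sorted(pool, key=lambda pair: abs(pair["margin"]))[:k]


# Hypothetical on-policy candidate pool: each entry is a response pair
# annotated with a proxy reward margin (names are illustrative).
rng = random.Random(0)
pool = [{"id": i, "margin": rng.gauss(0.0, 2.0)} for i in range(100)]

uncertain = select_uncertain(pool, k=8)
baseline = select_random(pool, k=8, rng=rng)
```

Both selectors then feed their chosen pairs into the same online DPO update; the paper's finding is that the extra ranking work in `select_uncertain` buys little over `select_random` when the pretrained model's candidate pool is already diverse.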

Abstract

Modern LLMs inherit strong priors from web-scale pretraining, which can limit the headroom of post-training data-selection strategies. While Active Preference Learning (APL) seeks to optimize query efficiency in online Direct Preference Optimization (DPO), the inherent richness of on-policy candidate pools often renders simple Random sampling a surprisingly formidable baseline. We evaluate uncertainty-based APL against Random across harmlessness, helpfulness, and instruction-following settings, utilizing both reward models and LLM-as-a-judge proxies. We find that APL yields negligible improvements in proxy win-rates compared to Random. Crucially, we observe a dissociation where win-rate improves even as general capability (measured by standard benchmarks) degrades. APL fails to mitigate this capability collapse or reduce variance significantly better than random sampling. Our findings suggest that in the regime of strong pre-trained priors, the computational overhead of active selection is difficult to justify against the "cheap diversity" provided by simple random samples. Our code is available at https://github.com/BootsofLagrangian/random-vs-apl.