Efficient Preference Poisoning Attack on Offline RLHF

arXiv stat.ML / 5/5/2026


Key Points

  • The paper studies how offline RLHF pipelines that train on pre-collected preference datasets can be exploited through preference poisoning, focusing on label-flip attacks against log-linear DPO.
  • It shows that flipping a single preference label causes a parameter-independent shift in the DPO gradient, which lets the targeted poisoning task be reformulated as a structured binary sparse approximation problem (a worked derivation follows this list).
  • The authors propose two attack algorithms: the Binary-Aware Lattice Attack (BAL-A), which pursues the minimum-flip objective via lattice reduction, and the Binary Matching Pursuit Attack (BMP-A), which greedily selects flips under a bounded flip budget; both respect the binary flip constraints.
  • The approach provides theoretical recovery guarantees and robustness/impossibility certificates for given K-flip budgets, demonstrating how dictionary geometry affects whether attacks succeed.
  • Experiments on synthetic settings and the Stanford Human Preferences dataset validate the theory and illustrate the practical implications of the proposed attack formulations.
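
To make the key property concrete, here is a minimal sketch of the gradient-shift argument for log-linear DPO, assuming a feature map φ and a uniform reference policy (the paper's setup may carry extra reference terms that cancel the same way); the target perturbation g⋆ and tolerance ε below are illustrative notation, not the paper's.

```latex
% Per-pair DPO loss under a log-linear policy with feature map \phi:
%   \ell_i(\theta) = -\log \sigma(\beta\, \theta^\top \Delta_i),
%   \Delta_i = \phi(x_i, y_w^i) - \phi(x_i, y_l^i).
\nabla_\theta \ell_i(\theta) = -\beta\, \sigma(-\beta\, \theta^\top \Delta_i)\, \Delta_i .

% Flipping the label swaps y_w^i and y_l^i, i.e. \Delta_i \mapsto -\Delta_i, so
\nabla_\theta \ell_i^{\mathrm{flip}}(\theta) - \nabla_\theta \ell_i(\theta)
  = \beta \left[ \sigma(\beta\, \theta^\top \Delta_i)
               + \sigma(-\beta\, \theta^\top \Delta_i) \right] \Delta_i
  = \beta\, \Delta_i ,

% since \sigma(u) + \sigma(-u) = 1: the shift is independent of \theta.
% Steering the total gradient toward a target perturbation g^\star is then
% a binary sparse approximation over the "dictionary" of atoms \beta\Delta_i:
\min_{s \in \{0,1\}^n} \; \|s\|_0
  \quad \text{s.t.} \quad
  \Big\| \textstyle\sum_{i=1}^{n} s_i\, \beta \Delta_i - g^\star \Big\| \le \varepsilon .
```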

Abstract

Offline Reinforcement Learning from Human Feedback (RLHF) pipelines such as Direct Preference Optimization (DPO) train on a pre-collected preference dataset, which makes them vulnerable to preference poisoning attacks. We study label-flip attacks against log-linear DPO. We first show that flipping one preference label induces a parameter-independent shift in the DPO gradient. Using this key property, we convert the targeted poisoning problem into a structured binary sparse approximation problem. To solve this problem, we develop two attack methods: Binary-Aware Lattice Attack (BAL-A) and Binary Matching Pursuit Attack (BMP-A). BAL-A embeds the binary flip selection problem into a binary-aware lattice and applies Lenstra-Lenstra-Lovász reduction and Babai's nearest-plane algorithm; we provide sufficient conditions that enforce binary coefficients and recover the minimum-flip objective. BMP-A adapts binary matching pursuit to our non-normalized gradient dictionary and yields coherence-based recovery guarantees and robustness (impossibility) certificates for K-flip budgets. Experiments on synthetic dictionaries and the Stanford Human Preferences dataset validate the theory and highlight how dictionary geometry governs attack success.
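
BAL-A's lattice step can be pictured with a standard embedding. The construction below is the classic Lagarias-Odlyzko/CJLOSS-style embedding for binary selection, offered purely as intuition; the paper's binary-aware lattice and its sufficient conditions are presumably more refined. Here D stacks the atoms βΔᵢ as columns, t is the target gradient shift, and the scaling C and target vector v are our notation.

```latex
% One standard way to embed binary selection into a closest-vector problem
% (a sketch; the paper's binary-aware lattice may differ in its details).
B = \begin{pmatrix} I_n \\ C\, D \end{pmatrix},
\qquad
v = \begin{pmatrix} \tfrac{1}{2}\,\mathbf{1}_n \\ C\, t \end{pmatrix},
\qquad
\| B s - v \|^2
  = \sum_{i=1}^{n} \left( s_i - \tfrac{1}{2} \right)^2 + C^2 \| D s - t \|^2 .

% For integer s_i, (s_i - 1/2)^2 >= 1/4 with equality iff s_i \in \{0,1\},
% so the first term equals n/4 exactly on binary vectors and is larger
% otherwise; a large scaling C > 0 then makes close lattice vectors both
% binary and accurate. LLL-reducing B and running Babai's nearest-plane
% algorithm on v yields a candidate flip vector s.
```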
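For BMP-A, a minimal greedy sketch conveys the idea: with coefficients fixed to one, pick at each step the atom that most reduces the squared residual, up to a K-flip budget. The function `binary_matching_pursuit`, the dictionary `D`, the target `t`, and the demo data are all hypothetical stand-ins, and the closing coherence bound is the classic matching-pursuit recovery condition, quoted for flavor rather than as the paper's exact certificate.

```python
import numpy as np

def binary_matching_pursuit(D, t, K):
    """Greedily select at most K columns of D, each with coefficient fixed
    to 1, to approximate the target t (a sketch of the BMP-A idea; the
    paper's variant and its guarantees may differ)."""
    n = D.shape[1]
    sq_norms = np.sum(D * D, axis=0)      # atoms need not be unit-norm
    residual = t.astype(float).copy()
    support = []
    for _ in range(K):
        # Exact decrease in ||residual||^2 from adding atom j once:
        # ||r||^2 - ||r - d_j||^2 = 2 d_j^T r - ||d_j||^2.
        gains = 2.0 * (D.T @ residual) - sq_norms
        gains[support] = -np.inf          # each label flips at most once
        j = int(np.argmax(gains))
        if gains[j] <= 0:                 # no flip shrinks the residual
            break
        support.append(j)
        residual -= D[:, j]
    s = np.zeros(n)
    s[support] = 1.0
    return s, residual

# Toy demo: plant a 3-flip target in a random dictionary and try to recover it.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 200))        # stand-in gradient-shift atoms
true_support = [5, 42, 117]
t = D[:, true_support].sum(axis=1)        # exact 3-flip target
s_hat, r = binary_matching_pursuit(D, t, K=5)
print("recovered flips:", sorted(np.flatnonzero(s_hat).tolist()))
print("residual norm:", float(np.linalg.norm(r)))

# Mutual coherence of the normalized dictionary; classic matching-pursuit
# analyses certify exact K-sparse recovery when K < (1 + 1/mu) / 2, which
# is the flavor of the paper's recovery/impossibility certificates.
Dn = D / np.linalg.norm(D, axis=0)
G = np.abs(Dn.T @ Dn)
np.fill_diagonal(G, 0.0)
mu = float(G.max())
print("coherence mu:", mu, "-> certified up to K <", (1 + 1 / mu) / 2)
```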