DynamicPO: Dynamic Preference Optimization for Recommendation

arXiv cs.AI / 5/4/2026


Key Points

  • The paper shows that in LLM-based recommendation, increasing the number of negative samples in direct preference optimization (DPO) can paradoxically degrade performance even as training loss keeps decreasing.
  • It attributes this “preference optimization collapse” to gradient suppression: easy, already well-separated negatives come to dominate the aggregate gradient, while the boundary-critical negatives that actually define user preference boundaries receive too little optimization (see the numeric sketch after this list).
  • To address the issue, the authors propose DynamicPO, a plug-and-play framework with Dynamic Boundary Negative Selection to prioritize informative near-boundary negatives.
  • DynamicPO also introduces Dual-Margin Dynamic beta Adjustment to vary optimization strength per sample based on boundary ambiguity.
  • Experiments on three public datasets indicate DynamicPO prevents optimization collapse and improves recommendation accuracy with negligible computational overhead, and the code is publicly released.
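
The gradient-suppression claim in the second bullet can be probed with a small numeric sketch. The snippet below assumes a sum-of-pairwise-sigmoid multi-negative DPO objective and illustrative margin values; neither comes from the paper, which may analyze a different multi-negative formulation. Under this reading, each easy negative contributes only a tiny per-negative gradient weight, but because randomly sampled negatives are overwhelmingly easy, their aggregate mass can swamp the boundary-critical signal as the negative count grows.

```python
import torch

def boundary_gradient_share(easy_count: int) -> float:
    """Fraction of gradient weight mass carried by boundary-critical
    negatives under a sum-of-pairwise-DPO loss (illustrative values)."""
    # Margins beta * (r_pos - r_neg): large for already-separated "easy"
    # negatives, near zero for two boundary-critical negatives.
    easy = torch.full((easy_count,), 3.0)
    boundary = torch.tensor([0.1, -0.1])
    margins = torch.cat([easy, boundary])
    # For the loss term -logsigmoid(margin_j), the gradient magnitude
    # with respect to margin_j is sigmoid(-margin_j).
    weights = torch.sigmoid(-margins)
    return (weights[easy_count:].sum() / weights.sum()).item()

for k in (4, 16, 64, 256):
    print(f"easy negatives={k:3d}  "
          f"boundary gradient share={boundary_gradient_share(k):.2f}")
```

In this toy setting, the boundary negatives' share of the total gradient drops from about 0.84 with 4 easy negatives to under 0.08 with 256, mirroring the reported pattern: the loss keeps decreasing on easy negatives while boundary-relevant signals are under-optimized.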

Abstract

In large language model (LLM)-based recommendation systems, direct preference optimization (DPO) effectively aligns recommendations with user preferences, with multi-negative objective functions needed to leverage the abundant negatives available from implicit feedback and to sharpen preference boundaries. However, our empirical analyses reveal a counterintuitive phenomenon, preference optimization collapse: increasing the number of negative samples can degrade performance even as the training loss continues to decrease. We further demonstrate theoretically that this collapse arises from gradient suppression, in which easily discriminable negatives dominate the optimization signal while the boundary-critical negatives that truly define user preference boundaries are under-optimized, weakening the model's decision boundary. Motivated by these observations, we propose DynamicPO (Dynamic Preference Optimization), a lightweight, plug-and-play framework comprising two adaptive mechanisms: Dynamic Boundary Negative Selection, which identifies and prioritizes informative negatives near the model's decision boundary, and Dual-Margin Dynamic beta Adjustment, which calibrates optimization strength per sample according to boundary ambiguity. Extensive experiments on three public datasets show that DynamicPO effectively prevents optimization collapse and improves the recommendation accuracy of multi-negative preference optimization methods, with negligible computational overhead. Our code and datasets are available at https://github.com/xingyuHuxingyu/DynamicPO.
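
To make the two mechanisms concrete, here is a minimal PyTorch sketch of how they could plug into a multi-negative DPO loss. The selection rule (keep the negatives whose preference margins are closest to zero) and the ambiguity-based per-negative beta interpolation are illustrative stand-ins chosen for this sketch; the paper's exact dual-margin rule is not reproduced here, and `k`, `beta_lo`, and `beta_hi` are hypothetical hyperparameters.

```python
import torch
import torch.nn.functional as F

def dynamicpo_style_loss(r_pos: torch.Tensor, r_negs: torch.Tensor,
                         k: int = 4, beta_lo: float = 0.1,
                         beta_hi: float = 1.0) -> torch.Tensor:
    """Sketch of a DynamicPO-style multi-negative DPO loss.

    r_pos:  implicit reward of the preferred item, shape ().
    r_negs: implicit rewards of the sampled negatives, shape (K,).
    Both are log(pi_theta) - log(pi_ref) terms computed elsewhere.
    """
    margins = r_pos - r_negs  # preference margin per negative

    # (1) Dynamic Boundary Negative Selection (illustrative rule):
    # keep the k negatives whose margins are closest to zero, i.e.
    # the ones nearest the model's current decision boundary.
    _, idx = margins.abs().topk(k, largest=False)
    boundary_margins = margins[idx]

    # (2) Dual-Margin Dynamic beta Adjustment (illustrative rule):
    # interpolate beta per negative so that more ambiguous negatives
    # (margin near zero) receive stronger optimization pressure.
    ambiguity = 2.0 * torch.sigmoid(-boundary_margins.abs())  # in (0, 1]
    betas = beta_lo + (beta_hi - beta_lo) * ambiguity

    # Pairwise DPO terms with per-negative beta, averaged.
    return -F.logsigmoid(betas * boundary_margins).mean()

# Toy usage with made-up implicit-reward values.
r_pos = torch.tensor(0.8)
r_negs = torch.tensor([-2.5, -3.1, 0.6, 0.9, -0.2, -4.0, 1.1, -2.8])
print(dynamicpo_style_loss(r_pos, r_negs))
```

Because this only filters and reweights per-negative terms that a multi-negative DPO implementation already computes, the extra cost is a top-k and an elementwise rescale, which is consistent with the paper's claim of negligible computational overhead.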