GroupDPO: Memory-Efficient Group-wise Direct Preference Optimization

arXiv cs.CL / 4/20/2026


Key Points

  • The paper proposes GroupDPO, a memory-efficient algorithm for group-wise Direct Preference Optimization that addresses the scalability limits of earlier group-coupled objectives.
  • It improves training efficiency by decoupling samples during backpropagation while preserving the gradients of the group-coupled objective, significantly reducing peak GPU memory usage and allowing larger groups of candidate responses.
  • Experiments in both offline and online alignment settings show that using multiple responses per prompt performs better than training on a single positive-negative pair.
  • The authors find that adding a negative log-likelihood (NLL) term on positive responses is essential for both improved performance and more stable training.
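The last two points can be made concrete with a toy objective. The paper's exact loss is not given here, so the sketch below uses a common group-wise formulation: a softmax contrast of the preferred response's reward against the whole group, plus an NLL anchor on the positive response. The function name, reward values, and `nll_coef` weight are all illustrative assumptions.

```python
import numpy as np

def group_dpo_loss(rewards, pos, logp_pos, nll_coef=0.1):
    """Hypothetical group-wise preference loss (not the paper's exact form):
    contrast the preferred response against all group members via a softmax
    over implicit rewards, plus an NLL term on the positive response."""
    r = np.asarray(rewards, dtype=float)
    # Numerically stable logsumexp over the group: the coupling term.
    lse = np.max(r) + np.log(np.sum(np.exp(r - np.max(r))))
    contrast = -(r[pos] - lse)          # = -log softmax(r)[pos]
    return contrast + nll_coef * (-logp_pos)

# Toy group of 4 candidate rewards (stand-ins for beta * log policy/ref ratios).
loss = group_dpo_loss([1.2, -0.3, 0.5, -1.0], pos=0, logp_pos=-2.0)
```

With a single pair this reduces to the usual sigmoid-style DPO contrast; with more candidates, every group member contributes to the normalizer, which is what couples the samples and drives up memory.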

Abstract

Preference optimization is widely used to align Large Language Models (LLMs) with preference feedback. However, most existing methods train on a single positive-negative pair per prompt, discarding additional supervision available in preference datasets that typically contain multiple candidate responses. Motivated by this limitation, recent work explores group-wise preference optimization, which jointly contrasts multiple responses for the same prompt, but its empirical behavior and scalability remain underexplored due to the memory overhead of group-coupled objectives. In this work, we introduce a memory-efficient group-wise preference optimization algorithm that preserves gradients while decoupling samples during backpropagation, substantially reducing peak memory usage, which enables scalable training with larger group sizes. Across both offline and online alignment settings, we show that leveraging multiple responses consistently outperforms single-pair training. Furthermore, incorporating a negative log-likelihood (NLL) term on positive responses is critical for both performance gains and training stability.
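The decoupling idea the abstract describes can be illustrated with a toy linear model: a first no-grad pass caches the group's softmax weights (the only term that couples samples), then each sample is backpropagated independently with its cached weight treated as a constant, so only one sample's activations are live at a time. The accumulated gradient matches the gradient of the fully coupled loss. The linear reward, feature matrix `X`, and group size below are made-up illustrations, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.normal(size=4)          # toy "policy" parameters
X = rng.normal(size=(6, 4))         # 6 candidate responses as feature rows (assumed)
pos = 0                             # index of the preferred response

def rewards(th):
    return X @ th                   # r_j = theta . x_j (stand-in for beta * log-ratio)

def coupled_loss(th):
    r = rewards(th)
    lse = np.max(r) + np.log(np.sum(np.exp(r - np.max(r))))
    return -(r[pos] - lse)          # group-coupled softmax contrast

# Pass 1 (no grad): cache the softmax weights -- the only coupling term.
r = rewards(theta)
p = np.exp(r - r.max()); p /= p.sum()

# Pass 2: accumulate per-sample gradients one response at a time,
# treating the cached weight p[j] as a detached constant.
grad = np.zeros_like(theta)
for j in range(len(r)):
    w = p[j] - (1.0 if j == pos else 0.0)
    grad += w * X[j]                # peak memory ~ one sample's activations

# Sanity check: the accumulated gradient matches a finite-difference
# gradient of the fully coupled loss.
eps = 1e-6
fd = np.array([(coupled_loss(theta + eps * np.eye(4)[k])
                - coupled_loss(theta - eps * np.eye(4)[k])) / (2 * eps)
               for k in range(4)])
assert np.allclose(grad, fd, atol=1e-5)
```

In an LLM setting the same two-pass structure would cache per-response log-probabilities instead of linear rewards, letting group size grow without the activations of all candidates residing in memory simultaneously.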