Causal Direct Preference Optimization for Distributionally Robust Generative Recommendation

arXiv cs.AI / 3/25/2026


Key Points

  • The paper finds that Direct Preference Optimization (DPO) for generative recommendation can amplify spurious correlations from environmental confounders, reducing out-of-distribution (OOD) generalization.
  • It proposes CausalDPO, which extends DPO with causal invariance learning, including backdoor adjustment, soft clustering of latent environment distributions, and invariance constraints.
  • The authors provide theoretical arguments that CausalDPO better captures users’ stable preference structures across multiple environments.
  • Experiments across four distribution-shift scenarios show an average improvement of 17.17% across four evaluation metrics, supporting the method's effectiveness for robust recommendation.
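To make the first key point concrete, the standard DPO objective (which CausalDPO extends) scores each preference pair by how much the policy widens the gap between the chosen and rejected responses relative to a frozen reference model. A minimal sketch, using the published DPO loss rather than anything specific to this paper:

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO loss for one preference pair.

    logp_w / logp_l: policy log-probs of the chosen (w) and rejected (l)
    responses; ref_logp_w / ref_logp_l: the same under the frozen
    reference model. beta scales the implicit reward.
    """
    # Implicit reward margin: how much more the policy prefers the chosen
    # response than the reference model does.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # -log sigmoid(margin): small when the policy already separates the
    # pair in the preferred direction, large otherwise.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Because the loss rewards any feature that separates chosen from rejected items, it will happily exploit environment-specific (spurious) correlations, which is the failure mode the paper targets.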

Abstract

Direct Preference Optimization (DPO) guides large language models (LLMs) to generate recommendations aligned with users' historical behavior distributions by minimizing a preference-alignment loss. However, our systematic empirical research and theoretical analysis reveal that DPO tends to amplify spurious correlations caused by environmental confounders during the alignment process, significantly undermining the generalization capability of LLM-based generative recommendation methods in out-of-distribution (OOD) scenarios. To mitigate this issue, we propose CausalDPO, an extension of DPO that incorporates a causal invariance learning mechanism. The method introduces a backdoor adjustment strategy during the preference-alignment phase to eliminate interference from environmental confounders, explicitly models the latent environment distribution using a soft clustering approach, and enforces robust consistency across diverse environments through invariance constraints. Theoretical analysis demonstrates that CausalDPO can effectively capture users' stable preference structures across multiple environments, thereby improving the OOD generalization performance of LLM-based recommendation models. We conduct extensive experiments under four representative distribution-shift settings to validate the effectiveness of CausalDPO, achieving an average performance improvement of 17.17% across four evaluation metrics.
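The abstract does not give the exact form of the invariance constraint, but the general pattern it describes (soft-assign each interaction to latent environments, then penalize inconsistency of the loss across those environments) can be sketched in an IRM-style way. Everything below is a hypothetical illustration of that pattern, not the paper's actual formulation; `env_weights` stands in for the soft clustering responsibilities:

```python
def invariance_penalty(losses, env_weights):
    """IRM-style sketch: variance of per-environment mean losses.

    losses: per-sample preference-alignment losses.
    env_weights: soft clustering responsibilities, where
    env_weights[e][i] is the (hypothetical) probability that sample i
    belongs to latent environment e.
    """
    env_means = []
    for w in env_weights:
        total = sum(w)
        # Responsibility-weighted mean loss within one latent environment.
        env_means.append(sum(wi * li for wi, li in zip(w, losses)) / total)
    mu = sum(env_means) / len(env_means)
    # Zero when every environment incurs the same mean loss, i.e. the
    # learned preferences are stable across environments.
    return sum((m - mu) ** 2 for m in env_means) / len(env_means)
```

Adding such a penalty to the DPO objective discourages solutions that fit one environment's confounded correlations at the expense of others, which is the intuition behind the consistency constraint described above.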