Aligning Multimodal Sequential Recommendations via Robust Direct Preference Optimization with Sparse MoE

arXiv cs.CL / 4/1/2026


Key Points

  • The paper studies how Direct Preference Optimization (DPO) performs for multimodal sequential recommendation when implicit feedback makes unobserved items unreliable negatives.
  • It finds that replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool improves ranking consistently.
  • The improvement is attributed to reducing harmful gradients from false negatives while preserving useful hard-signal information and smoothing training through controlled randomness.
  • With an optional sparse Mixture-of-Experts (MoE) encoder, the proposed RoDPO method achieves NDCG@5 gains of up to 5.25% on three Amazon benchmarks, with nearly unchanged inference cost.

Abstract

Preference-based alignment objectives have been widely adopted, from RLHF-style pairwise learning in large language models to emerging applications in recommender systems. Yet, existing work rarely examines how Direct Preference Optimization (DPO) behaves under implicit feedback, where unobserved items are not reliable negatives. We conduct systematic experiments on multimodal sequential recommendation to compare common negative-selection strategies and their interaction with DPO training. Our central finding is that a simple modification, replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool, consistently improves ranking performance. We attribute its effectiveness to two factors: (1) reducing erroneous suppressive gradients caused by false negatives, and (2) retaining informative hard signals while smoothing optimization via controlled stochasticity. With an optional sparse Mixture-of-Experts encoder for efficient capacity scaling, the resulting method, RoDPO (Robust DPO), achieves NDCG@5 improvements of up to 5.25% on three Amazon benchmarks, with nearly unchanged inference cost.
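The core modification can be sketched concretely. The following is a minimal, hypothetical illustration (not the authors' code): instead of always taking the single highest-scored unobserved item as the negative, the negative is drawn uniformly at random from the current top-K unobserved candidates, and a pairwise DPO-style loss is applied to the policy-minus-reference score margin. All function names, the uniform sampling choice, and the score-as-log-probability simplification are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_negative(scores, observed, k=50):
    """Stochastic negative from a dynamic top-K candidate pool (sketch).

    scores   : model scores over the full item catalog for one user,
               recomputed each step so the pool is "dynamic".
    observed : set of item indices with observed interactions; these are
               never used as negatives (they may be false negatives).
    """
    order = np.argsort(-scores)                         # rank catalog by score
    pool = [i for i in order if i not in observed][:k]  # top-K unobserved items
    return rng.choice(pool)                             # sample, don't argmax

def dpo_pair_loss(s_pos, s_neg, ref_pos, ref_neg, beta=0.1):
    """Pairwise DPO-style loss: -log sigmoid of the beta-scaled margin
    between policy-vs-reference score gaps for the positive and negative."""
    margin = beta * ((s_pos - ref_pos) - (s_neg - ref_neg))
    return -np.log(1.0 / (1.0 + np.exp(-margin)))

# Toy usage: catalog of 10 items, items 8 and 9 already observed.
scores = np.arange(10.0)
neg = sample_negative(scores, observed={8, 9}, k=3)  # drawn from {5, 6, 7}
loss = dpo_pair_loss(s_pos=2.0, s_neg=1.0, ref_pos=0.0, ref_neg=0.0)
```

Sampling from the pool rather than taking its top element is what reduces the chance of repeatedly suppressing the same false negative, while restricting the pool to the top-K keeps the negatives hard enough to be informative.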