AI Navigate

FlexRec: Adapting LLM-based Recommenders for Flexible Needs via Reinforcement Learning

arXiv cs.LG / 3/13/2026


Key Points

  • The paper argues that recommender systems must adapt to dynamic, need-specific objectives and explores using RL-based post-training of LLMs to align recommendations with complex goals.
  • It identifies two main obstacles for RL in closed-set autoregressive ranking: coarse credit assignment from sequence-level rewards and sparse, noisy interaction feedback.
  • FlexRec proposes a causally grounded item-level reward based on counterfactual swaps within the remaining candidate pool and a critic-guided, uncertainty-aware reward scaling to stabilize learning.
  • Empirically, FlexRec delivers substantial gains: up to 59% NDCG@5 and 109.4% Recall@5 improvements in need-specific ranking, and up to 24.1% Recall@5 gains under generalization settings, outperforming strong traditional and LLM-based baselines.
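The paper does not publish its reward formula here, but the counterfactual-swap idea can be sketched as follows: at each decoding step, the reward for the item the model just placed is measured against the counterfactual of having placed one of the still-unranked candidates instead. This is a minimal sketch under the assumption that the counterfactual baseline is the mean relevance of the remaining pool; the function name and relevance labels are illustrative, not from the paper.

```python
from typing import Dict, List

def item_level_rewards(ranking: List[int], relevance: Dict[int, float]) -> List[float]:
    """Hypothetical counterfactual-swap reward: the credit for placing an
    item at a position is its relevance minus the mean relevance of the
    remaining (not yet ranked) candidates it was chosen over."""
    rewards = []
    remaining = list(ranking)  # candidates still available at this step
    for item in ranking:
        pool = [c for c in remaining if c != item]
        # Counterfactual baseline: average outcome had we swapped in
        # another candidate from the remaining pool.
        baseline = sum(relevance[c] for c in pool) / len(pool) if pool else 0.0
        rewards.append(relevance[item] - baseline)
        remaining.remove(item)
    return rewards
```

Unlike a single sequence-level reward, this assigns a distinct signal to every position, which is the fine-grained credit assignment the paper argues RL needs in this setting.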

Abstract

Modern recommender systems must adapt to dynamic, need-specific objectives for diverse recommendation scenarios, yet most traditional recommenders are optimized for a single static target and struggle to reconfigure behavior on demand. Recent advances in reinforcement-learning-based post-training have unlocked strong instruction-following and reasoning capabilities in LLMs, suggesting a principled route for aligning them to complex recommendation goals. Motivated by this, we study closed-set autoregressive ranking, where an LLM generates a permutation over a fixed candidate set conditioned on user context and an explicit need instruction. However, applying RL to this setting faces two key obstacles: (i) sequence-level rewards yield coarse credit assignment that fails to provide fine-grained training signals, and (ii) interaction feedback is sparse and noisy, which together lead to inefficient and unstable updates. We propose FlexRec, a post-training RL framework that addresses both issues with (1) a causally grounded item-level reward based on counterfactual swaps within the remaining candidate pool, and (2) critic-guided, uncertainty-aware scaling that explicitly models reward uncertainty and down-weights low-confidence rewards to stabilize learning under sparse supervision. Across diverse recommendation scenarios and objectives, FlexRec achieves substantial gains: it improves NDCG@5 by up to **59%** and Recall@5 by up to **109.4%** in need-specific ranking, and further achieves up to **24.1%** Recall@5 improvement under generalization settings, outperforming strong traditional recommenders and LLM-based baselines.
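The abstract's second mechanism, critic-guided uncertainty-aware scaling, down-weights rewards the critic is unsure about. One simple way to realize this, sketched here purely as an illustration (the ensemble-disagreement proxy and the inverse-variance weight are assumptions, not the paper's stated design), is to scale each reward by the disagreement among an ensemble of critic value estimates:

```python
import statistics
from typing import List

def uncertainty_scaled_reward(reward: float, critic_estimates: List[float]) -> float:
    """Illustrative uncertainty-aware scaling: treat the variance of an
    ensemble of critic value estimates as reward uncertainty and shrink
    low-confidence rewards toward zero."""
    var = statistics.pvariance(critic_estimates)
    weight = 1.0 / (1.0 + var)  # high disagreement -> small weight
    return weight * reward
```

When the critics agree (variance near zero), the reward passes through unchanged; when they disagree, the update it drives is attenuated, which is the stabilizing effect the abstract attributes to this component under sparse, noisy feedback.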