Uncertainty-Aware Variational Reward Factorization via Probabilistic Preference Bases for LLM Personalization

arXiv cs.CL / 4/3/2026


Key Points

  • The paper proposes Variational Reward Factorization (VRF) to improve LLM reward-factorization personalization by modeling user preferences probabilistically rather than as deterministic weights estimated from limited data.
  • VRF learns user-specific variational distributions in a shared preference space using a variational encoder, then matches them to shared probabilistic basis functions via Wasserstein distance to obtain more reliable weights.
  • It reduces the impact of uncertain user inferences through a variance-attenuated loss, aiming to make personalization robust when user data is scarce or noisy.
  • Experiments on three benchmarks show VRF outperforming prior methods for both seen and unseen users, across few-shot settings and different uncertainty levels, with improvements carrying over to downstream alignment tasks.
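The Wasserstein matching step above can be illustrated with a small sketch. The paper's exact formulation is not given here, so this is a minimal illustration under two assumptions: user and basis distributions are diagonal Gaussians (for which the 2-Wasserstein distance has a closed form), and weights are obtained by a softmax over negative distances. The function names and the temperature parameter are hypothetical.

```python
import numpy as np

def w2_diag_gauss(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein distance between two diagonal Gaussians (closed form)."""
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)

def preference_weights(user_mu, user_var, basis_mus, basis_vars, temp=1.0):
    """Soft weights over K probabilistic bases from negative Wasserstein distances."""
    d = np.array([w2_diag_gauss(user_mu, user_var, m, v)
                  for m, v in zip(basis_mus, basis_vars)])
    logits = -d / temp
    logits -= logits.max()          # numerical stability before exponentiation
    w = np.exp(logits)
    return w / w.sum()              # normalized user-specific weights
```

A user distribution close to one basis receives most of its weight mass, which is the behavior a distance-based matching scheme is designed to produce.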

Abstract

Reward factorization personalizes large language models (LLMs) by decomposing rewards into shared basis functions and user-specific weights. Yet, existing methods estimate user weights from scarce data in isolation and as deterministic points, leading to inaccurate and unreliable inference. We introduce Variational Reward Factorization (VRF), an uncertainty-aware framework that represents each user's preferences as a variational distribution in a shared preference space. VRF infers user distributions via a variational encoder, derives weights through Wasserstein distance matching with shared probabilistic bases, and downweights uncertain estimates through a variance-attenuated loss. On three benchmarks, VRF outperforms all baselines across seen and unseen users, few-shot scenarios, and varying uncertainty levels, with gains extending to downstream alignment.
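The variance-attenuated loss mentioned in the abstract is not spelled out, but a common way to downweight uncertain estimates is a heteroscedastic objective in which squared residuals are scaled by a predicted variance, with a log-variance penalty so the model cannot inflate variance for free. The sketch below assumes that form; it is an illustration, not the paper's definition.

```python
import numpy as np

def variance_attenuated_loss(pred, target, log_var):
    """Residuals are divided by the predicted variance, so high-uncertainty
    examples contribute less; the 0.5 * log_var term penalizes large variance."""
    var = np.exp(log_var)
    return np.mean((pred - target) ** 2 / (2.0 * var) + 0.5 * log_var)
```

With `log_var = 0` this reduces to half the mean squared error; for a large residual, raising the predicted variance lowers the loss, which is exactly the attenuation effect.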