Many Preferences, Few Policies: Towards Scalable Language Model Personalization

arXiv cs.CL / 4/7/2026


Key Points

  • The paper argues that maintaining a separate, fully personalized LLM for every user is impractical (compute, memory, and system complexity), and instead proposes a small portfolio of LLMs covering the variety of user preferences.
  • It introduces a multi-trait preference model (e.g., safety, humor, brevity) represented by a weight vector, and uses reward functions across these dimensions to define personalized objectives.
  • The proposed algorithm, PALM (Portfolio of Aligned LLMs), selects a small set of LLMs such that for any preference weight vector the portfolio includes a near-optimal model for the corresponding scalarized goal.
  • The work claims theoretical guarantees on both portfolio size and approximation quality, explicitly characterizing the cost–personalization trade-off and the required diversity of models.
  • Experiments are reported to validate the theoretical results and show improved output diversity compared with common baselines.
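
The selection problem described in these key points can be illustrated with a toy greedy procedure. This is our own illustrative stand-in, not the paper's PALM algorithm: candidate models are summarized by hypothetical per-trait reward profiles, user preferences by weight vectors on the simplex, and models are added one at a time to shrink the worst-case gap to the per-preference optimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each candidate model has an expected reward per trait
# (rows = models, columns = traits such as safety, humor, brevity).
model_rewards = rng.uniform(size=(50, 3))

# A user's preference is a weight vector on the simplex; their scalarized
# objective is the weighted sum of trait rewards. We approximate the space
# of preferences by sampling weight vectors from a Dirichlet distribution.
weights = rng.dirichlet(np.ones(3), size=1000)

values = weights @ model_rewards.T   # (n_weights, n_models) scalarized values
opt = values.max(axis=1)             # best achievable value per weight vector


def greedy_portfolio(values, opt, k):
    """Greedily pick k models, each time minimizing the worst-case regret
    (gap to the per-weight optimum) over the sampled preferences."""
    chosen = []
    best_so_far = np.full(values.shape[0], -np.inf)
    for _ in range(k):
        # Regret per weight vector if we were to add each candidate model.
        gaps = opt[:, None] - np.maximum(best_so_far[:, None], values)
        worst = gaps.max(axis=0)     # worst-case regret per candidate
        j = int(worst.argmin())      # candidate that most reduces it
        chosen.append(j)
        best_so_far = np.maximum(best_so_far, values[:, j])
    return chosen, float((opt - best_so_far).max())


portfolio, regret = greedy_portfolio(values, opt, k=5)
```

Growing the portfolio can only weakly decrease the worst-case regret, which mirrors the cost–personalization trade-off the paper claims to characterize: a larger (more costly) portfolio covers the preference simplex more tightly.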

Abstract

The holy grail of LLM personalization is a single LLM for each user, perfectly aligned with that user's preferences. However, maintaining a separate LLM per user is impractical due to constraints on compute, memory, and system complexity. We address this challenge by developing a principled method for selecting a small portfolio of LLMs that captures representative behaviors across heterogeneous users. We model user preferences across multiple traits (e.g., safety, humor, brevity) through a multi-dimensional weight vector. Given reward functions across these dimensions, our algorithm PALM (Portfolio of Aligned LLMs) generates a small portfolio of LLMs such that, for any weight vector, the portfolio contains a near-optimal LLM for the corresponding scalarized objective. To the best of our knowledge, this is the first result that provides theoretical guarantees on both the size and approximation quality of LLM portfolios for personalization. It characterizes the trade-off between system cost and personalization, as well as the diversity of LLMs required to cover the landscape of user preferences. We provide empirical results that validate these guarantees and demonstrate greater output diversity over common baselines.
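
In our own notation (the symbols here are assumptions, not necessarily the paper's), the scalarized objective described in the abstract, and the near-optimality guarantee a portfolio should satisfy, can be written as:

```latex
% Scalarized objective for a model \pi under trait rewards r_1, \dots, r_d
% and a preference weight vector \mathbf{w} on the simplex:
J_{\mathbf{w}}(\pi) \;=\; \sum_{i=1}^{d} w_i \,
  \mathbb{E}_{x,\, y \sim \pi(\cdot \mid x)}\!\left[ r_i(x, y) \right],
\qquad w_i \ge 0,\quad \sum_{i=1}^{d} w_i = 1.

% A portfolio P is \varepsilon-optimal if, for every preference vector,
% it contains a near-optimal model for the corresponding scalarized goal:
\max_{\pi \in P} J_{\mathbf{w}}(\pi)
  \;\ge\; \max_{\pi} J_{\mathbf{w}}(\pi) - \varepsilon
\quad \text{for all } \mathbf{w}.
```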