Personalizing Text-to-Image Generation to Individual Taste

arXiv cs.CV / 4/10/2026


Key Points

  • The paper argues that modern text-to-image models optimize for average aesthetic appeal but do not capture the subjective nature of individual taste.
  • It introduces PAMELA, a new dataset of 70,000 user ratings covering 5,000 images from state-of-the-art T2I generators (Flux 2 and Nano Banana), with each image evaluated by 15 users across domains like art, fashion, and cinematic photography.
  • The authors propose a personalized reward model that is trained using PAMELA alongside existing aesthetic assessment data, aiming to predict individual image preference more accurately than current population-level approaches.
  • Experiments show that the personalized predictor enables prompt optimization to steer generations toward a specific user’s preferences.
  • The dataset and model are released to support standardized research on personalized T2I alignment and subjective visual quality assessment.
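The paper does not specify the reward model's architecture in this summary, but the idea of a personalized predictor can be sketched as a shared image-scoring term combined with a learned per-user component. The dimensions, the bilinear form, and all names below are illustrative assumptions, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: image features (e.g. from a frozen vision encoder),
# small per-user embeddings, and 15 users as in the PAMELA setup.
D_IMG, D_USER, N_USERS = 16, 4, 15

class PersonalizedReward:
    """Toy personalized reward: a shared linear score plus a
    user-specific bilinear term and per-user rating bias."""

    def __init__(self):
        self.w = rng.normal(scale=0.1, size=D_IMG)               # shared taste
        self.V = rng.normal(scale=0.1, size=(D_USER, D_IMG))     # projection
        self.U = rng.normal(scale=0.1, size=(N_USERS, D_USER))   # user embeddings
        self.b = np.zeros(N_USERS)                               # user bias

    def score(self, img_feat, user_id):
        shared = self.w @ img_feat
        personal = self.U[user_id] @ (self.V @ img_feat)
        return shared + personal + self.b[user_id]

model = PersonalizedReward()
img = rng.normal(size=D_IMG)
# The same image gets a different score per user, which is the point
# of personalization over a single population-level reward.
scores = [model.score(img, u) for u in range(N_USERS)]
```

In this sketch, setting all user embeddings to zero recovers a conventional population-level aesthetic scorer; the per-user terms are what a dataset like PAMELA (15 raters per image) makes trainable.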

Abstract

Modern text-to-image (T2I) models generate high-fidelity visuals but remain indifferent to individual user preferences. While existing reward models optimize for "average" human appeal, they fail to capture the inherent subjectivity of aesthetic judgment. In this work, we introduce a novel dataset and predictive framework, called PAMELA, designed to model personalized image evaluations. Our dataset comprises 70,000 ratings across 5,000 diverse images generated by state-of-the-art models (Flux 2 and Nano Banana). Each image is evaluated by 15 unique users, providing a rich distribution of subjective preferences across domains such as art, design, fashion, and cinematic photography. Leveraging this data, we propose a personalized reward model trained jointly on our high-quality annotations and existing aesthetic assessment subsets. We demonstrate that our model predicts individual preferences more accurately than most current state-of-the-art methods predict population-level preferences. Using our personalized predictor, we show how simple prompt optimization methods can steer generations toward individual user preferences. Our results highlight the importance of data quality and personalization in handling the subjectivity of user preferences. We release our dataset and model to facilitate standardized research in personalized T2I alignment and subjective visual quality assessment.
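The "simple prompt optimization" the abstract mentions can be illustrated as a search over prompt variants scored by a reward function. The paper's actual method is not detailed here; the loop below is a minimal random-search sketch, and `toy_user_reward` is a stand-in for the personalized reward model:

```python
import random

random.seed(0)

def toy_user_reward(prompt):
    # Stand-in for the personalized reward model: a keyword-weight table
    # mimicking one user's stylistic taste (purely illustrative values).
    weights = {"watercolor": 0.9, "minimalist": 0.6, "neon": 0.2, "baroque": 0.1}
    return sum(w for kw, w in weights.items() if kw in prompt)

def optimize_prompt(base, modifiers, iters=20):
    """Random search: append sampled style modifiers to the base prompt
    and keep whichever candidate the reward model scores highest."""
    best_prompt, best_score = base, toy_user_reward(base)
    for _ in range(iters):
        candidate = base + ", " + ", ".join(random.sample(modifiers, k=2))
        s = toy_user_reward(candidate)
        if s > best_score:
            best_prompt, best_score = candidate, s
    return best_prompt, best_score

mods = ["watercolor", "minimalist", "neon", "baroque"]
best, score = optimize_prompt("a portrait of a cat", mods)
```

With a real personalized reward in place of the toy scorer, the same loop steers generations toward one user's taste without retraining the T2I model itself, since only the prompt is modified.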