Adaptive Querying with AI Persona Priors

arXiv cs.CL / 5/4/2026


Key Points

  • The paper tackles adaptive querying under strict question budgets for learning user-specific targets like held-out-item responses and psychometric indicators.
  • It proposes a persona-induced latent variable model where a user’s state is represented by membership in a finite set of AI personas, each backed by response distributions from a large language model.
  • By using this finite-mixture persona model, the approach enables expressive Bayesian priors with closed-form posterior updates and efficient mixture-based predictions for sequential item selection.
  • Experiments on synthetic data and WorldValuesBench show that the persona-based posterior delivers accurate probabilistic predictions and a more interpretable adaptive elicitation workflow than prior methods.
  • The main contribution is a scalable alternative to classical Bayesian design approaches that often require restrictive assumptions or costly posterior approximations, especially in heterogeneous and cold-start settings.
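The mechanism behind these points can be made concrete with a small sketch. The code below is a hypothetical illustration, not the paper's implementation: persona response distributions are random stand-ins for the LLM-produced tables the paper describes, and the item-selection rule (greedy expected information gain over the persona belief) is one common choice for sequential Bayesian design. What it does show faithfully is why the finite-mixture form matters: the posterior over personas updates in closed form, and predictions for unseen items are cheap mixture averages.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_items, n_choices = 4, 10, 3  # personas, candidate questions, answer options

# Stand-in persona response tables; in the paper these come from an LLM.
# likelihood[k, i] is persona k's distribution over answers to item i.
likelihood = rng.dirichlet(np.ones(n_choices), size=(K, n_items))

def posterior_update(belief, item, answer):
    """Closed-form Bayes update over the finite persona mixture."""
    post = belief * likelihood[:, item, answer]
    return post / post.sum()

def predictive(belief, item):
    """Mixture predictive distribution for an (unseen) item."""
    return belief @ likelihood[:, item]

def info_gain(belief, item):
    """Expected reduction in persona-belief entropy from asking `item`."""
    p_y = predictive(belief, item)
    h_prior = -(belief * np.log(belief + 1e-12)).sum()
    h_post = sum(
        p_y[y] * -(p := posterior_update(belief, item, y),
                   (p * np.log(p + 1e-12)).sum())[1] * -1.0
        for y in range(n_choices)
    )
    return h_prior - h_post

# Greedy adaptive querying under a 3-question budget.
belief = np.full(K, 1.0 / K)
asked = []
true_persona = 2  # simulated user
for _ in range(3):
    remaining = [i for i in range(n_items) if i not in asked]
    item = max(remaining, key=lambda i: info_gain(belief, i))
    answer = rng.choice(n_choices, p=likelihood[true_persona, item])
    belief = posterior_update(belief, item, answer)
    asked.append(item)

print("asked:", asked, "belief:", belief.round(3))
```

Each query costs only O(K) per candidate answer, which is the scalability point the paper makes against posterior-approximation-heavy classical designs.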

Abstract

We study adaptive querying for learning user-dependent quantities of interest, such as responses to held-out items and psychometric indicators, within tight question budgets. Classical Bayesian design and computerized adaptive testing typically rely on restrictive parametric assumptions or expensive posterior approximations, limiting their use in heterogeneous, high-dimensional, and cold-start settings. We introduce a persona-induced latent variable model that represents a user's state through membership in a finite dictionary of AI personas, each offering response distributions produced by a large language model. This yields expressive priors with closed-form posterior updates and efficient finite-mixture predictions, enabling scalable Bayesian design for sequential item selection. Experiments on synthetic data and WorldValuesBench demonstrate that persona-based posteriors deliver accurate probabilistic predictions and an interpretable adaptive elicitation pipeline.