AI Navigate

Prompt Engineering for Scale Development in Generative Psychometrics

arXiv cs.AI / 3/18/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The Monte Carlo study examines how prompt engineering strategies (zero-shot, few-shot, persona-based, and adaptive prompting) influence the quality of LLM-generated personality assessment items within the AI-GENIE generative psychometrics framework; the sketch after this list renders the four strategies as prompt templates.
  • Adaptive prompting consistently outperforms non-adaptive designs by reducing semantic redundancy, improving pre-reduction structural validity, and preserving larger item pools, especially with newer, higher-capacity models.
  • The gains are robust across temperature settings for most models, though GPT-4o shows a model-specific sensitivity to adaptive constraints at high temperatures.
  • Prompt design significantly affects both pre- and post-reduction item quality, with the strongest benefits observed when adaptive prompting is paired with high-capability models.
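To make the four designs concrete, the sketch below expresses each strategy as a plain prompt template. The wording, few-shot exemplars, and persona text are illustrative assumptions; the paper's actual AI-GENIE prompts are not reproduced here.

```python
# Hypothetical prompt templates for the four strategies compared in the study.
# The wording, exemplars, and persona text are illustrative, not the paper's
# actual AI-GENIE prompts.

def zero_shot(trait: str, n: int) -> str:
    return (
        f"Write {n} self-report questionnaire items measuring the Big Five "
        f"trait '{trait}'. One item per line."
    )

def few_shot(trait: str, n: int, examples: list[str]) -> str:
    shots = "\n".join(f"- {e}" for e in examples)
    return (
        f"Example items for the Big Five trait '{trait}':\n{shots}\n"
        f"Write {n} new items in the same style. One item per line."
    )

def persona_based(trait: str, n: int) -> str:
    return (
        "You are an experienced psychometrician developing a personality scale. "
        f"Write {n} self-report items measuring the Big Five trait '{trait}'. "
        "One item per line."
    )

def adaptive(trait: str, n: int, accepted: list[str]) -> str:
    # The adaptive design feeds the running item pool back into the prompt so
    # the model steers away from semantic duplicates of earlier items.
    seen = "\n".join(f"- {a}" for a in accepted) or "(none yet)"
    return (
        f"Items already generated for the Big Five trait '{trait}':\n{seen}\n"
        f"Write {n} additional items that are semantically distinct from all "
        "of the above. One item per line."
    )

if __name__ == "__main__":
    print(adaptive("openness", 5, ["I enjoy trying new foods.",
                                   "I am drawn to abstract art."]))
```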

Abstract

This Monte Carlo simulation examines how prompt engineering strategies shape the quality of large language model (LLM)-generated personality assessment items within the AI-GENIE framework for generative psychometrics. Item pools targeting the Big Five traits were generated using multiple prompting designs (zero-shot, few-shot, persona-based, and adaptive), model temperatures, and LLMs, then evaluated and reduced using network psychometric methods. Across all conditions, AI-GENIE reliably improved structural validity following reduction, with the magnitude of its incremental contribution inversely related to the quality of the incoming item pool. Prompt design exerted a substantial influence on both pre- and post-reduction item quality. Adaptive prompting consistently outperformed non-adaptive strategies by sharply reducing semantic redundancy, elevating pre-reduction structural validity, and preserving substantially larger item pools, particularly when paired with newer, higher-capacity models. These gains were robust across temperature settings for most models, indicating that adaptive prompting mitigates common trade-offs between creativity and psychometric coherence. An exception was observed for GPT-4o at high temperatures, suggesting model-specific sensitivity to adaptive constraints at elevated stochasticity. Overall, the findings demonstrate that adaptive prompting is the strongest approach in this context and that its benefits scale with model capability, motivating continued investigation of model–prompt interactions in generative psychometric pipelines.
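The abstract describes adaptive prompting as feeding the growing item pool back to the model so that it avoids semantic duplicates. The sketch below is one minimal way such a loop could be wired, assuming OpenAI's chat and embeddings APIs as stand-ins for the study's models; the model names and the 0.9 cosine cutoff are arbitrary placeholders, and the real AI-GENIE pipeline performs item reduction with network psychometric methods rather than this simple similarity filter.

```python
# Minimal sketch of an adaptive generation loop with a semantic-redundancy
# filter. Assumptions: OpenAI's chat and embeddings APIs stand in for the
# study's models, and the 0.9 cosine cutoff is an arbitrary placeholder.
# AI-GENIE's actual reduction step uses network psychometric methods instead.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Return unit-normalized embedding vectors, one row per text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vecs = np.array([d.embedding for d in resp.data])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def generate_items(prompt: str, temperature: float) -> list[str]:
    """Ask the model for items, one per line."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the study compares several LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    lines = resp.choices[0].message.content.splitlines()
    return [ln.lstrip("-• ").strip() for ln in lines if ln.strip()]

def adaptive_pool(trait: str, rounds: int = 5, per_round: int = 10,
                  temperature: float = 0.7, sim_cutoff: float = 0.9) -> list[str]:
    accepted: list[str] = []
    accepted_vecs = np.empty((0, 0))
    for _ in range(rounds):
        seen = "\n".join(f"- {a}" for a in accepted) or "(none yet)"
        prompt = (
            f"Items already generated for the Big Five trait '{trait}':\n{seen}\n"
            f"Write {per_round} additional self-report items that are "
            "semantically distinct from all of the above. One item per line."
        )
        candidates = generate_items(prompt, temperature)
        if not candidates:
            continue
        cand_vecs = embed(candidates)
        for item, vec in zip(candidates, cand_vecs):
            # Reject candidates too similar (cosine) to any accepted item.
            if accepted and (accepted_vecs @ vec).max() >= sim_cutoff:
                continue
            accepted.append(item)
            accepted_vecs = (vec[None, :] if len(accepted) == 1
                             else np.vstack([accepted_vecs, vec]))
    return accepted
```

The design point the sketch is meant to show is the feedback loop: the prompt itself carries the accepted pool, so redundancy is discouraged at generation time rather than filtered out only afterward, consistent with the abstract's finding that adaptive prompting preserves larger item pools.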