
AI Psychometrics: Evaluating the Psychological Reasoning of Large Language Models with Psychometric Validities

arXiv cs.AI / 3/13/2026


Key Points

  • AI Psychometrics applies psychometric validity frameworks to evaluate the psychological reasoning of large language models, proposing a systematic evaluation approach.
  • The study assesses GPT-3.5, GPT-4, LLaMA-2, and LLaMA-3 using the Technology Acceptance Model to test convergent, discriminant, predictive, and external validity.
  • All four models generally meet the validity criteria, with GPT-4 and LLaMA-3 showing higher psychometric validity than GPT-3.5 and LLaMA-2.
  • The findings support the viability of applying AI Psychometrics to interpret LLMs and enable cross-model comparisons of psychological traits.
  • The work contributes to AI evaluation methodology by linking model performance with psychometric validity, suggesting new directions for model assessment.

Abstract

The immense parameter counts and deep neural architectures of large language models (LLMs) rival the complexity of the human brain, which also makes them opaque "black box" systems that are challenging to evaluate and interpret. AI Psychometrics is an emerging field that aims to tackle these challenges by applying psychometric methodologies to evaluate and interpret the psychological traits and processes of artificial intelligence (AI) systems. This paper investigates the application of AI Psychometrics to evaluate the psychological reasoning and overall psychometric validity of four prominent LLMs: GPT-3.5, GPT-4, LLaMA-2, and LLaMA-3. Using the Technology Acceptance Model (TAM), we examined convergent, discriminant, predictive, and external validity across these models. Our findings reveal that responses from all four models generally met all validity criteria. Moreover, higher-performing models such as GPT-4 and LLaMA-3 consistently demonstrated superior psychometric validity compared to their predecessors, GPT-3.5 and LLaMA-2. These results help establish the validity of applying AI Psychometrics to evaluate and interpret large language models.
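To make the convergent/discriminant validity checks concrete, here is a minimal illustrative sketch of how such tests are typically computed from Likert-scale questionnaire responses. It does not reproduce the paper's method or data: the TAM constructs (perceived usefulness and perceived ease of use), the item counts, and all response values below are fabricated assumptions for illustration. Convergent validity expects items measuring the same construct to correlate strongly; discriminant validity expects weaker correlations across distinct constructs.

```python
# Hypothetical sketch: convergent vs. discriminant validity via item correlations.
# All data here are simulated; real analyses would use LLM questionnaire responses.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # number of simulated response sets

# Two simulated latent TAM constructs (assumed for illustration):
# perceived usefulness (PU) and perceived ease of use (PEOU).
pu_latent = rng.normal(size=n)
peou_latent = rng.normal(size=n)

# Three observed items per construct = latent signal + measurement noise.
pu_items = np.stack([pu_latent + 0.4 * rng.normal(size=n) for _ in range(3)])
peou_items = np.stack([peou_latent + 0.4 * rng.normal(size=n) for _ in range(3)])

def avg_within_corr(items):
    """Average off-diagonal correlation among items of one construct."""
    c = np.corrcoef(items)
    k = c.shape[0]
    return (c.sum() - k) / (k * (k - 1))

def avg_cross_corr(a, b):
    """Average correlation between every item pair across two constructs."""
    return np.mean([np.corrcoef(x, y)[0, 1] for x in a for y in b])

convergent = min(avg_within_corr(pu_items), avg_within_corr(peou_items))
discriminant = avg_cross_corr(pu_items, peou_items)

# Convergent validity: within-construct correlations should be high;
# discriminant validity: cross-construct correlations should be markedly lower.
print(f"within-construct r = {convergent:.2f}, cross-construct r = {discriminant:.2f}")
```

In this simulation the within-construct correlations come out high while the cross-construct ones stay near zero, which is the pattern a validity check looks for; the paper applies this kind of criterion to responses elicited from each LLM.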