From Feature-Based Models to Generative AI: Validity Evidence for Constructed Response Scoring

arXiv cs.AI / 3/23/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that rapid advances in large language models are enabling broader use of generative AI in high-stakes constructed-response scoring, where it may outperform traditional feature-based approaches and reduce the need for handcrafted features.
  • It compares validity evidence requirements across human ratings, feature-based NLP scoring, and generative AI scoring, noting that generative AI demands more extensive validation due to transparency and consistency concerns.
  • The authors propose best practices for collecting validity evidence to support the use and interpretation of scores produced by generative AI scoring systems.
  • Using a large corpus of argumentative essays from grades 6-12, the study demonstrates how validity evidence can be collected for different scoring systems and highlights the complexities involved in making validity arguments for generative AI–based scores.

Abstract

The rapid advancements in large language models and generative artificial intelligence (AI) capabilities are making their broad application in the high-stakes testing context more likely. Use of generative AI in the scoring of constructed responses is particularly appealing because it reduces the effort required for handcrafting features in traditional AI scoring and might even outperform those methods. The purpose of this paper is to highlight the differences between feature-based and generative AI applications in constructed response scoring systems and to propose a set of best practices for the collection of validity evidence to support the use and interpretation of constructed response scores from scoring systems using generative AI. We compare the validity evidence needed in scoring systems using human ratings, feature-based natural language processing AI scoring engines, and generative AI. The evidence needed in the generative AI context is more extensive than in the feature-based scoring context because of the lack of transparency and other concerns unique to generative AI, such as score consistency. Constructed response score data from a large corpus of independent argumentative essays written by students in grades 6-12 demonstrate the collection of validity evidence for different types of scoring systems and highlight the numerous complexities and considerations when making a validity argument for these scores.
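
To make the consistency concern concrete, here is a minimal sketch, not drawn from the paper, of two routine pieces of agreement evidence: quadratic weighted kappa against human ratings, and run-to-run consistency of a generative scorer. All score arrays, the 0-5 rubric, and the simulated noise are hypothetical stand-ins; the paper's actual data and analyses may differ.

```python
# Hypothetical illustration: agreement and consistency evidence for
# rubric scores. Not the paper's data or method.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Hypothetical 0-5 rubric scores assigned by human raters to 200 essays.
human = rng.integers(0, 6, size=200)

# Hypothetical feature-based engine: deterministic, so one run suffices.
feature_based = np.clip(human + rng.integers(-1, 2, size=200), 0, 5)

# Hypothetical generative scorer: re-prompting the same essays can yield
# different scores, so consistency across runs must itself be checked.
genai_run1 = np.clip(human + rng.integers(-1, 2, size=200), 0, 5)
genai_run2 = np.clip(genai_run1 + rng.integers(-1, 2, size=200), 0, 5)

def qwk(a, b):
    """Quadratic weighted kappa, a standard agreement index for rubric scores."""
    return cohen_kappa_score(a, b, weights="quadratic")

print(f"human vs feature-based QWK:  {qwk(human, feature_based):.2f}")
print(f"human vs generative QWK:     {qwk(human, genai_run1):.2f}")
# Evidence specific to generative AI: does the same system give the
# same essay the same score on a second run?
print(f"generative run-to-run QWK:   {qwk(genai_run1, genai_run2):.2f}")
print(f"exact agreement across runs: {(genai_run1 == genai_run2).mean():.0%}")
```

The run-to-run check is one example of why the evidentiary burden grows for generative systems: a deterministic feature-based engine scores an essay identically every time, so that entire class of evidence only becomes necessary once the scorer's output can vary across repeated runs.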