Measuring What Matters -- or What's Convenient?: Robustness of LLM-Based Scoring Systems to Construct-Irrelevant Factors

arXiv cs.CL / March 27, 2026


Key Points

  • The study examines how construct-irrelevant factors affect a dual-architecture LLM-based automated scoring system for short, essay-like open-response items in a situational judgment test.
  • The system was found to be generally robust to meaningless padding, spelling errors, and variations in writing sophistication.
  • However, duplicating large passages of text led to systematically lower predicted scores on average, which runs counter to findings from prior research on non-LLM-based scoring systems.
  • Off-topic responses were heavily penalized, suggesting the approach can meaningfully detect and downweight irrelevant content when designed for construct relevance.
  • Overall, the findings support the robustness potential of future LLM-based scoring systems, while highlighting specific failure modes (e.g., text duplication) that warrant careful design and evaluation.

Abstract

Automated systems have been widely adopted across the educational testing industry for open-response assessment and essay scoring. These systems commonly achieve performance comparable to, or better than, trained human raters, but they have frequently been shown to be vulnerable to construct-irrelevant factors (i.e., features of responses that are unrelated to the construct being assessed) and adversarial conditions. Given the rising use of large language models in automated scoring systems, there is renewed focus on "hallucinations" and on the robustness of LLM-based automated scoring approaches to construct-irrelevant factors. This study investigates the effects of construct-irrelevant factors on a dual-architecture LLM-based scoring system designed to score short, essay-like open-response items in a situational judgment test. The scoring system was found to be generally robust to padding responses with meaningless text, to spelling errors, and to variations in writing sophistication. Duplicating large passages of text resulted in lower predicted scores on average, contradicting results from previous studies of non-LLM-based scoring systems, while off-topic responses were heavily penalized. These results provide encouraging support for the robustness of future LLM-based scoring systems when designed with construct relevance in mind.
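
As a rough illustration of the perturbation analysis the abstract describes, the sketch below applies three of the construct-irrelevant manipulations studied (meaningless padding, duplication, spelling errors) to a response and measures the resulting score shift. Everything here is hypothetical: `score()` is a toy keyword-overlap stand-in, not the paper's dual-architecture system, and the rubric, response text, and perturbation parameters are invented for the example.

```python
import random
import re

# Toy keyword-overlap scorer standing in for the paper's dual-architecture
# LLM-based system (an assumption for illustration only): the score is the
# fraction of rubric keywords the response mentions.
RUBRIC = {"listen", "apologize", "escalate", "follow", "up"}

def score(response: str) -> float:
    words = set(re.findall(r"[a-z]+", response.lower()))
    return len(words & RUBRIC) / len(RUBRIC)

def pad_with_filler(response: str, n: int = 20) -> str:
    """Append meaningless padding, one of the perturbations studied."""
    return response + " lorem ipsum" * n

def duplicate(response: str) -> str:
    """Repeat the whole response, the perturbation that lowered scores."""
    return response + " " + response

def add_typos(response: str, rate: float = 0.1, seed: int = 0) -> str:
    """Swap adjacent letters at a fixed rate to simulate spelling errors."""
    rng = random.Random(seed)
    chars = list(response)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

if __name__ == "__main__":
    base = "I would listen first, apologize, and follow up with my manager."
    for name, perturb in [("padding", pad_with_filler),
                          ("duplication", duplicate),
                          ("typos", add_typos)]:
        shift = score(perturb(base)) - score(base)
        print(f"{name:12s} score shift: {shift:+.2f}")
```

Note that the toy scorer cannot reproduce the duplication penalty the paper reports (a bag-of-keywords score is invariant to repetition); the harness structure, not the stand-in scorer, is the point. With the actual system plugged into `score()`, the paper's findings would correspond to near-zero average shifts for padding, spelling errors, and writing sophistication, a negative average shift for duplication, and a large penalty for off-topic responses.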