Designing Reliable LLM-Assisted Rubric Scoring for Constructed Responses: Evidence from Physics Exams

arXiv cs.AI / 4/15/2026

Key Points

  • The study evaluates the reliability of AI-assisted rubric scoring for handwritten undergraduate physics responses using GPT-4o, comparing results with instructor ratings across two scoring rounds.
  • Human–AI agreement on total scores was similar to human inter-rater reliability overall, but agreement dropped for mid-level performances where reasoning is partial or ambiguous.
  • Criterion-level results showed stronger alignment for clearly defined conceptual skills than for longer, more subjective procedural judgments.
  • A more fine-grained, checklist-style skill rubric improved scoring consistency compared with holistic rubrics, indicating rubric structure is the primary driver of reliability.
  • Systematic tests found that prompting format had a secondary effect and that model temperature had relatively limited impact, yielding practical recommendations for implementing reliable LLM-assisted STEM scoring (a sketch of these configuration choices follows this list).
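
To make these configuration choices concrete, the sketch below shows one way a checklist-style rubric and a temperature-controlled GPT-4o scoring call could be wired together. The rubric items, prompt wording, and `score_response` helper are illustrative assumptions rather than the authors' actual pipeline; only the OpenAI Python SDK call (`client.chat.completions.create` with a `temperature` parameter) is standard.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical fine-grained, checklist-style rubric: each criterion is a
# narrow, binary skill check rather than a holistic judgment.
RUBRIC = [
    "Identifies the relevant physical principle (e.g., energy conservation).",
    "Sets up the governing equation with correct symbols.",
    "Substitutes the given quantities with consistent units.",
    "Carries the algebra and arithmetic through correctly.",
    "States the final answer with appropriate units.",
]

def score_response(transcribed_response: str, temperature: float = 0.0) -> str:
    """Score one transcribed student response against the checklist rubric.

    A low temperature is a reasonable default here, given the study's finding
    that temperature had relatively limited impact on reliability.
    """
    checklist = "\n".join(f"{i + 1}. {item}" for i, item in enumerate(RUBRIC))
    prompt = (
        "Score the student response against each rubric item below.\n"
        "For every item, answer 1 (met) or 0 (not met), then report the total.\n\n"
        f"Rubric:\n{checklist}\n\nStudent response:\n{transcribed_response}"
    )
    completion = client.chat.completions.create(
        model="gpt-4o",
        temperature=temperature,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content
```

Since the study's responses were handwritten, an equivalent pipeline would more likely pass page images to GPT-4o's vision input; the text-only version above keeps the sketch minimal.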

Abstract

Student responses in STEM assessments are often handwritten and combine symbolic expressions, calculations, and diagrams, creating substantial variation in format and interpretation. Despite their importance for evaluating students' reasoning, such responses are time-consuming to score and prone to rater inconsistency, particularly when partial credit is required. Recent advances in large language models (LLMs) have increased attention to AI-assisted scoring, yet evidence remains limited regarding how rubric design and LLM configurations influence reliability across performance levels. This study examined the reliability of AI-assisted scoring of undergraduate physics constructed responses using GPT-4o. Twenty authentic handwritten exam responses were scored across two rounds by four instructors and by the AI model using skill-based rubrics with differing levels of analytic granularity. Prompting format and temperature settings were systematically varied. Overall, human–AI agreement on total scores was comparable to human inter-rater reliability and was highest for high- and low-performing responses, but declined for mid-level responses involving partial or ambiguous reasoning. Criterion-level analyses showed stronger alignment for clearly defined conceptual skills than for extended procedural judgments. A more fine-grained, checklist-based rubric improved consistency relative to holistic scoring. These findings indicate that reliable AI-assisted scoring depends primarily on clear, well-structured rubrics, while prompting format plays a secondary role and temperature has relatively limited impact. More broadly, the study provides transferable design recommendations for implementing reliable LLM-assisted scoring in STEM contexts through skill-based rubrics and controlled LLM settings.
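
The human–AI and human–human agreement comparisons described above can be quantified with standard chance-corrected statistics. The abstract does not name the exact reliability metric, so the quadratically weighted Cohen's kappa below is an assumption, and the score vectors are made-up placeholders standing in for total scores on the twenty responses.

```python
from sklearn.metrics import cohen_kappa_score

# Placeholder total scores for the same 20 responses (illustrative only).
human_scores = [8, 5, 9, 3, 6, 7, 2, 8, 4, 6, 9, 5, 7, 3, 8, 6, 4, 9, 5, 7]
ai_scores    = [8, 4, 9, 3, 7, 7, 2, 8, 5, 6, 9, 6, 7, 3, 8, 5, 4, 9, 5, 6]

# Quadratic weighting penalizes large disagreements more than near-misses,
# which suits ordinal rubric totals that carry partial credit.
kappa = cohen_kappa_score(human_scores, ai_scores, weights="quadratic")
print(f"Quadratically weighted kappa (human vs. AI): {kappa:.3f}")
```

The same computation applied to each pair of human raters yields the inter-rater baseline the AI is compared against, and restricting the score vectors to a single rubric criterion gives the criterion-level comparison.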