Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math

arXiv cs.AI / 3/27/2026


Key Points

  • The paper argues that assessing handwritten math scratchwork for personalized feedback is poorly served by existing educational NLP, which centers on textual responses, and that current multimodal LLMs take an "examinee perspective", prioritizing producing correct answers over diagnosing student errors.
  • It introduces ScratchMath, a new benchmark with 1,720 annotated samples of handwritten Chinese primary/middle-school math, designed for error diagnosis through two tasks: Error Cause Explanation (ECE) and Error Cause Classification (ECC) across seven error types.
  • The dataset is created using a multi-stage human–machine collaborative annotation process with expert labeling, review, and verification to ensure annotation quality.
  • Experiments evaluating 16 leading MLLMs show sizable performance gaps versus human experts, particularly in visual recognition and logical reasoning; proprietary models generally outperform open-source ones, and large reasoning models show strong potential for error explanation.
  • The authors make the evaluation data and frameworks publicly available to support further research into multimodal error analysis for education.

Abstract

Assessing student handwritten scratchwork is crucial for personalized educational feedback but presents unique challenges due to diverse handwriting, complex layouts, and varied problem-solving approaches. Existing educational NLP primarily focuses on textual responses and neglects the complexity and multimodality inherent in authentic handwritten scratchwork. Current multimodal large language models (MLLMs) excel at visual reasoning but typically adopt an "examinee perspective", prioritizing generating correct answers rather than diagnosing student errors. To bridge these gaps, we introduce ScratchMath, a novel benchmark specifically designed for explaining and classifying errors in authentic handwritten mathematics scratchwork. Our dataset comprises 1,720 mathematics samples from Chinese primary and middle school students, supporting two key tasks: Error Cause Explanation (ECE) and Error Cause Classification (ECC), with seven defined error types. The dataset is meticulously annotated through rigorous human-machine collaborative approaches involving multiple stages of expert labeling, review, and verification. We systematically evaluate 16 leading MLLMs on ScratchMath, revealing significant performance gaps relative to human experts, especially in visual recognition and logical reasoning. Proprietary models notably outperform open-source models, with large reasoning models showing strong potential for error explanation. All evaluation data and frameworks are publicly available to facilitate further research.
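The paper's evaluation framework is not reproduced here, but a classification task like ECC is typically scored with accuracy and macro-F1 over the defined label set. The sketch below illustrates that kind of scoring under stated assumptions: the `classify` callable (a wrapper around whatever MLLM is being tested), the placeholder `ERROR_TYPES` names, and the `(image_path, gold_label)` sample format are all hypothetical and not taken from ScratchMath.

```python
from collections import Counter
from typing import Callable, Iterable

# Hypothetical label set: ScratchMath defines seven error types, but their
# names are not listed in this article, so these are placeholders.
ERROR_TYPES = [f"error_type_{i}" for i in range(1, 8)]

def evaluate_ecc(
    samples: Iterable[tuple[str, str]],   # (image_path, gold_label) pairs (assumed format)
    classify: Callable[[str], str],       # model wrapper: image path -> predicted label
) -> dict[str, float]:
    """Score a model on Error Cause Classification with accuracy and macro-F1."""
    gold_by_type: Counter[str] = Counter()   # true count per class
    pred_by_type: Counter[str] = Counter()   # predicted count per class
    hits_by_type: Counter[str] = Counter()   # correct predictions per class
    total, correct = 0, 0

    for image_path, gold in samples:
        pred = classify(image_path)
        total += 1
        gold_by_type[gold] += 1
        pred_by_type[pred] += 1
        if pred == gold:
            correct += 1
            hits_by_type[gold] += 1

    # Macro-F1: unweighted average of per-class F1 over the seven error types.
    f1_scores = []
    for label in ERROR_TYPES:
        tp = hits_by_type[label]
        precision = tp / pred_by_type[label] if pred_by_type[label] else 0.0
        recall = tp / gold_by_type[label] if gold_by_type[label] else 0.0
        denom = precision + recall
        f1_scores.append(2 * precision * recall / denom if denom else 0.0)

    return {
        "accuracy": correct / total if total else 0.0,
        "macro_f1": sum(f1_scores) / len(f1_scores),
    }
```

The open-ended ECE task (free-form explanations of why a student erred) would need reference-based or judge-based scoring rather than exact-match metrics, which is why only the classification side is sketched here.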