Can MLLMs Read Students' Minds? Unpacking Multimodal Error Analysis in Handwritten Math
arXiv cs.AI / 3/27/2026
Key Points
- The paper argues that evaluating handwritten math scratchwork for personalized feedback remains difficult for existing educational NLP systems, and that current multimodal LLMs (MLLMs) are typically optimized for answering correctly rather than for diagnosing student errors.
- It introduces ScratchMath, a new benchmark of 1,720 annotated samples of handwritten Chinese primary- and middle-school math, designed for error diagnosis through two tasks: Error Cause Explanation (ECE) and Error Cause Classification (ECC) across seven error types (a minimal scoring sketch follows this list).
- The dataset is created using a multi-stage human–machine collaborative annotation process with expert labeling, review, and verification to ensure annotation quality.
- Experiments evaluating 16 leading MLLMs show sizable performance gaps versus human experts, with particular weaknesses in visual recognition and logical reasoning; proprietary models generally outperform open-source ones.
- The authors make the evaluation data and frameworks publicly available to support further research into multimodal error analysis for education.
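The summary does not specify how ECC is scored, so the following is only a minimal sketch of one plausible setup: a seven-way error-type classification evaluated with macro-F1. The category names, the `macro_f1` helper, and the toy data are illustrative placeholders, not the ScratchMath paper's actual taxonomy or evaluation code.

```python
# Hypothetical ECC-style scoring sketch: given gold error labels and model
# predictions over a fixed seven-label set, compute macro-averaged F1.
from collections import Counter

# Placeholder labels; the paper defines its own seven error types.
ERROR_TYPES = [
    "calculation", "concept", "copying", "procedure",
    "reading", "unit", "other",
]

def macro_f1(gold: list[str], pred: list[str]) -> float:
    """Unweighted mean of per-class F1 across all error types."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1          # correct prediction for class g
        else:
            fp[p] += 1          # p was predicted but wrong
            fn[g] += 1          # g was missed
    f1s = []
    for label in ERROR_TYPES:
        prec = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        rec = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Toy usage: three annotated samples versus model predictions.
gold = ["calculation", "concept", "copying"]
pred = ["calculation", "procedure", "copying"]
print(f"macro-F1 = {macro_f1(gold, pred):.3f}")
```

Macro-averaging is a natural choice here because it weights rare error types equally with common ones, which matters when a benchmark's goal is diagnosing every kind of mistake rather than just the most frequent.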