SciEval: A Benchmark for Automatic Evaluation of K-12 Science Instructional Materials

arXiv cs.AI / April 29, 2026

Key Points

  • The paper introduces SciEval, a benchmark dataset for automatically evaluating K-12 science instructional materials, targeting the scalability problem of expert-led reviews.
  • It formulates Automatic Instructional Materials Evaluation (AIME) as a generative task: given a lesson and an educator-designed criterion from the EQuIP rubric, predict a rubric score and supporting evidence (see the sketch after this list), grounded in an expert-annotated dataset of 273 lesson-level materials.
  • Testing mainstream LLMs (GPT, Gemini, Llama, Qwen) on SciEval shows that none delivers strong performance out of the box, indicating reliability gaps on this education-domain evaluation task.
  • Fine-tuning Qwen3 on SciEval yields up to 11% performance gains on a held-out test set, suggesting that domain-specific training is key to making LLM-based automated evaluation viable for instructional materials.
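
As a concrete illustration of the AIME formulation above, here is a minimal zero-shot sketch using the OpenAI Python client. The model name, criterion text, lesson file, and prompt wording are all illustrative assumptions, not the paper's actual setup.

```python
# Hypothetical zero-shot AIME baseline: ask an LLM for a rubric score plus
# supporting evidence for one (lesson, criterion) pair. Everything below is
# illustrative; the paper's prompts and model choices may differ.
from openai import OpenAI

client = OpenAI()

# One EQuIP rubric criterion, used here only as an example.
CRITERION = "I.A Explaining Phenomena / Designing Solutions"
LESSON = open("lesson_042.txt").read()  # hypothetical lesson-level material

response = client.chat.completions.create(
    model="gpt-4o",  # stand-in for the GPT family tested in the paper
    messages=[
        {"role": "system",
         "content": "You are an expert reviewer of K-12 science instructional materials."},
        {"role": "user",
         "content": (
             f"Rubric criterion: {CRITERION}\n\nLesson:\n{LESSON}\n\n"
             "Return a rubric score for this criterion and quote the evidence "
             "from the lesson that supports your score."
         )},
    ],
)
print(response.choices[0].message.content)  # expected output: a score plus evidence quotes
```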

Abstract

The need to evaluate instructional materials for K-12 science education has become increasingly important as more educators use generative AI to create them. However, reviewing instructional materials is time-consuming, expertise-intensive, and difficult to scale, motivating interest in automated evaluation. While large language models (LLMs) have shown strong performance on general evaluation tasks, their performance and reliability on instructional materials remain unclear. To address this gap, we formulate Automatic Instructional Materials Evaluation (AIME) as a generative AI task that predicts scores and evidence using educator-designed rubrics. We create a benchmark dataset and develop baseline models for AIME. First, we curate the first AIME dataset, SciEval, consisting of instructional materials annotated with pedagogy-aligned evaluation scores and evidence-based rationales. Expert annotations achieve high inter-rater reliability, yielding a dataset of 273 lesson-level instructional materials evaluated against the 13 criteria of the EQuIP rubric (N = 273 × 13 = 3,549 criterion-level judgments). Second, we test mainstream LLMs (GPT, Gemini, Llama, and Qwen) on SciEval and find that none achieves strong performance. Third, we fine-tune Qwen3 on SciEval; results on a held-out test set show that domain-aligned fine-tuning yields performance gains of up to 11%, highlighting the importance of domain-specific fine-tuning for AIME and paving the way for LLM use in other educational tasks.
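
For the fine-tuning step, the sketch below shows one plausible recipe using Hugging Face TRL's SFTTrainer. The data file, base checkpoint size, chat formatting, and hyperparameters are assumptions; the paper's exact training configuration is not reproduced here.

```python
# A minimal supervised fine-tuning sketch for Qwen3 on SciEval-style data,
# using Hugging Face TRL. All file names and hyperparameters are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical JSONL: each row is {"messages": [...]} pairing a rubric prompt
# with the expert score and evidence rationale as the assistant turn.
train = load_dataset("json", data_files="scieval_train.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-8B",  # TRL loads the model from a Hub id string
    train_dataset=train,
    args=SFTConfig(output_dir="qwen3-scieval", num_train_epochs=3),
)
trainer.train()
```

For an 8B checkpoint on a single GPU, parameter-efficient tuning (e.g., passing a LoRA `peft_config` to SFTTrainer) would be the more practical variant; whether the paper used full or parameter-efficient fine-tuning is not stated here.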