Human-in-the-Loop Benchmarking of Heterogeneous LLMs for Automated Competency Assessment in Secondary Level Mathematics

arXiv cs.AI / 4/30/2026


Key Points

  • The paper introduces a Human-in-the-Loop benchmarking framework to evaluate how effectively heterogeneous LLMs can automate competency-based assessment in secondary-level mathematics, addressing the manual burden of competency mapping in CBE.
  • Using Nepal’s Grade 10 Optional Mathematics curriculum, the authors build a multi-dimensional rubric spanning four math topics and four cross-cutting competencies (Comprehension, Knowledge, Operational Fluency, and Behavior & Correlation).
  • In a multi-provider ensemble (two open-weight Llama-family models plus two proprietary Gemini frontier models), the results reveal an "architecture-compatibility gap," where compliance with instruction and rubric constraints matters more than raw model size.
  • The Gemini-based sparse Mixture-of-Experts (MoE) models achieved only "Fair Agreement" with the faculty ground truth (κ_w ≈ 0.38), while the larger 70B Orion model showed "No Agreement" (κ_w = -0.0261), indicating unreliable rubric-constrained grading.
  • The study concludes LLMs are not ready for autonomous certification but can provide valuable assistive evidence extraction as part of a Human-in-the-Loop workflow.

Abstract

As Competency-Based Education (CBE) gains traction around the world, the shift from marks-based assessment to qualitative competency mapping imposes a substantial manual burden on educators. This paper addresses this bottleneck by proposing a "Human-in-the-Loop" benchmarking framework to assess the effectiveness of multiple LLMs in automating secondary-level mathematics assessment. Based on the Grade 10 Optional Mathematics curriculum in Nepal, we created a multi-dimensional rubric covering four topics and four cross-cutting competencies: Comprehension, Knowledge, Operational Fluency, and Behavior and Correlation. The multi-provider ensemble, consisting of the open-weight models Eagle (Llama 3.1-8B) and Orion (Llama 3.3-70B) and the proprietary frontier models Nova (Gemini 2.5 Flash) and Lyra (Gemini 3 Pro), was benchmarked against a ground truth defined by two senior mathematics faculty members (κ_w = 0.8652). The findings reveal a marked "architecture-compatibility gap". While the Gemini-based sparse Mixture-of-Experts (MoE) models achieved "Fair Agreement" (κ_w ≈ 0.38), the larger Orion (70B) model exhibited "No Agreement" (κ_w = -0.0261), suggesting that architectural compliance with instruction constraints outweighs raw parameter scale in rubric-constrained tasks. We conclude that while LLMs are not yet suitable for autonomous certification, they provide high-value assistive support for preliminary evidence extraction within a "Human-in-the-Loop" framework.
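
To make the reported agreement metric concrete, here is a minimal sketch of how weighted kappa (κ_w) between faculty ground-truth ratings and LLM-assigned rubric levels can be computed with scikit-learn. The quadratic weighting, the 4-level ordinal scale, and the sample ratings below are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: weighted-kappa agreement between faculty and LLM rubric ratings.
# Assumes quadratic weights and a 4-level ordinal competency scale (0 = lowest, 3 = highest);
# the ratings below are hypothetical and chosen only for illustration.
from sklearn.metrics import cohen_kappa_score

faculty_levels = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]   # ground-truth competency levels per response
model_levels   = [3, 2, 1, 1, 0, 2, 2, 2, 1, 3]   # LLM-assigned competency levels per response

kappa_w = cohen_kappa_score(faculty_levels, model_levels, weights="quadratic")
print(f"Weighted kappa: {kappa_w:.4f}")
```

Under this scheme, a κ_w near 0.87 (the inter-rater agreement between the two faculty members) marks strong agreement, values around 0.38 fall in the "Fair Agreement" band, and values at or below zero indicate no agreement beyond chance.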