Evaluating LLMs for Answering Student Questions in Introductory Programming Courses

arXiv cs.AI / 3/31/2026


Key Points

  • The paper evaluates whether LLMs can help educators respond to student questions in an introductory programming (CS1) course in a way that supports learning rather than handing over complete solutions.
  • It introduces a reproducible benchmark built from 170 authentic student questions (from an LMS) with ground-truth educator responses written by subject-matter experts.
  • To score open-ended pedagogical responses, the authors develop and validate a custom “LLM-as-a-Judge” metric that reflects pedagogical accuracy better than standard text-matching metrics.
  • Results indicate that certain models (e.g., Gemini 3 flash) can outperform the baseline quality of typical educator responses while aligning with expert pedagogical standards.
  • The authors recommend a “teacher-in-the-loop” workflow to reduce hallucination and improve alignment to course-specific context, and they propose a task-agnostic pre-deployment evaluation framework for educational LLM tools.

Abstract

The rapid emergence of Large Language Models (LLMs) presents both opportunities and challenges for programming education. While students increasingly use generative AI tools, direct access often hinders the learning process by providing complete solutions rather than pedagogical hints. Concurrently, educators face significant workload and scalability challenges when providing timely, personalized feedback. This study investigates the capabilities of LLMs to safely and effectively assist educators in answering student questions within a CS1 programming course. To achieve this, we established a rigorous, reproducible evaluation process by curating a benchmark dataset of 170 authentic student questions from a learning management system, paired with ground-truth responses authored by subject matter experts. Because traditional text-matching metrics are insufficient for evaluating open-ended educational responses, we developed and validated a custom LLM-as-a-Judge metric optimized for assessing pedagogical accuracy. Our findings demonstrate that models such as Gemini 3 flash can surpass the quality baseline of typical educator responses, achieving high alignment with expert pedagogical standards. To mitigate persistent risks like hallucination and ensure alignment with course-specific context, we advocate for a "teacher-in-the-loop" implementation. Finally, we abstract our methodology into a task-agnostic evaluation framework, advocating for a shift in the development of educational LLM tools from ad-hoc, post-deployment testing to a quantifiable, pre-deployment validation process.
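To make the LLM-as-a-Judge idea concrete, the sketch below shows one minimal way such a metric could be wired up. The paper does not publish its rubric or prompt, so the `RUBRIC` text, the 1–5 scale, and the `judge_score` / `judge` names here are illustrative assumptions, not the authors' actual implementation; the judge model itself is passed in as a plain callable so any LLM API (or a stub, for testing) can be plugged in.

```python
import re
from typing import Callable

# Hypothetical grading rubric; the paper's actual prompt is not public.
RUBRIC = """You are grading a teaching assistant's reply to a CS1 student.
Student question: {question}
Expert reference answer: {reference}
Candidate answer: {candidate}
Score the candidate from 1 (wrong or harmful) to 5 (pedagogically accurate:
guides the student with hints rather than a full solution).
Reply with a single integer."""


def judge_score(question: str, reference: str, candidate: str,
                judge: Callable[[str], str]) -> int:
    """Ask an LLM judge to grade one candidate answer against the
    expert ground-truth response; parse the first digit 1-5 it emits."""
    prompt = RUBRIC.format(question=question, reference=reference,
                           candidate=candidate)
    reply = judge(prompt)
    match = re.search(r"[1-5]", reply)
    if match is None:
        raise ValueError(f"unparseable judge reply: {reply!r}")
    return int(match.group())
```

In a real pipeline, `judge` would wrap a call to a strong LLM, and the per-question scores would be averaged across the 170-question benchmark to compare models against the educator baseline.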