CODE-GEN: A Human-in-the-Loop RAG-Based Agentic AI System for Multiple-Choice Question Generation

arXiv cs.AI / 4/7/2026


Key Points

  • CODE-GEN is introduced as a human-in-the-loop, retrieval-augmented (RAG) agentic AI system for generating context-aligned multiple-choice coding comprehension questions tied to course learning objectives.
  • The system uses two cooperating agents: a Generator that drafts questions and a Validator that independently scores content quality across seven pedagogical dimensions, supported by specialized tools for computational accuracy and code verification.
  • An evaluation with six subject-matter experts reviewed 288 AI-generated questions, producing 2,016 human-AI rating comparisons and additional qualitative feedback.
  • Results show strong performance, with human-validated success rates of 79.9%–98.6% across most dimensions that align with explicit criteria and computational checks.
  • The study finds that human expertise remains critical for harder pedagogical tasks such as crafting meaningfully plausible distractors and writing feedback that deepens understanding.

Abstract

We present CODE-GEN, a human-in-the-loop, retrieval-augmented generation (RAG)-based agentic AI system for generating context-aligned multiple-choice questions to develop students' code reasoning and comprehension abilities. CODE-GEN employs an agentic AI architecture in which a Generator agent produces multiple-choice coding comprehension questions aligned with course-specific learning objectives, while a Validator agent independently assesses content quality across seven pedagogical dimensions. Both agents are augmented with specialized tools that enhance computational accuracy and verify code outputs. To evaluate the effectiveness of CODE-GEN, we conducted an evaluation study involving six human subject-matter experts (SMEs) who judged 288 AI-generated questions. The SMEs produced a total of 2,016 human-AI rating pairs, indicating agreement or disagreement with the Validator's assessments, along with 131 instances of qualitative feedback. Analyses of SME judgments show strong system performance, with human-validated success rates ranging from 79.9% to 98.6% across the seven pedagogical dimensions. The analysis of qualitative feedback reveals that CODE-GEN achieves high reliability on dimensions well suited to computational verification and explicit criteria matching, including question clarity, code validity, concept alignment, and correct answer validity. In contrast, human expertise remains essential for dimensions requiring deeper instructional judgment, such as designing pedagogically meaningful distractors and providing high-quality feedback that reinforces understanding. These findings inform the strategic allocation of human and AI effort in AI-assisted educational content generation.
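The Generator/Validator loop described above can be sketched in outline. Note that this is an illustrative mock, not the paper's implementation: the function names, the 0–1 scoring scale, the review threshold, and the label of the seventh dimension (the summary names only six explicitly) are all assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

# Dimensions named in the abstract; the seventh label is a placeholder,
# since the summary lists only six dimensions by name.
DIMENSIONS = [
    "question_clarity", "code_validity", "concept_alignment",
    "correct_answer_validity", "distractor_quality",
    "feedback_quality", "dimension_7",
]

@dataclass
class MCQ:
    stem: str
    options: List[str]
    answer_index: int
    feedback: str

def generator_agent(objective: str, context: str) -> MCQ:
    """Hypothetical Generator: drafts an MCQ from retrieved course context."""
    return MCQ(
        stem=f"Given the snippet below, what does it print? ({objective})",
        options=["option A", "option B", "option C", "option D"],
        answer_index=0,
        feedback="The loop terminates after the first iteration.",
    )

def validator_agent(q: MCQ) -> Dict[str, float]:
    """Hypothetical Validator: scores each pedagogical dimension in [0, 1].
    A real validator would call tools here (e.g. execute the code to verify
    the keyed answer); this mock only applies one trivial check."""
    scores = {dim: 1.0 for dim in DIMENSIONS}
    if len(set(q.options)) < len(q.options):  # duplicate distractors
        scores["distractor_quality"] = 0.0
    return scores

def pipeline(objective: str, context: str,
             threshold: float = 0.8) -> Tuple[MCQ, Dict[str, float], List[str]]:
    """Generate, validate, and flag low-scoring dimensions for human review,
    reflecting the human-in-the-loop step."""
    q = generator_agent(objective, context)
    scores = validator_agent(q)
    needs_human = [d for d, s in scores.items() if s < threshold]
    return q, scores, needs_human

question, scores, review_queue = pipeline("while loops", "course notes on iteration")
print(len(scores), review_queue)  # → 7 []
```

The key design point the paper's findings suggest is the routing at the end: dimensions amenable to explicit checks (code validity, answer correctness) can be cleared automatically, while low-confidence dimensions such as distractor quality are escalated to human reviewers.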
