AI Navigate

Scalable Classification of Course Information Sheets Using Large Language Models: A Reusable Institutional Method for Academic Quality Assurance

arXiv cs.LG / 3/17/2026

💬 Opinion · Tools & Practical Usage · Models & Research

Key Points

  • The study presents an end-to-end, LLM-based pipeline to audit course information sheets for GenAI risk at scale in higher education.
  • It implements a four-phase workflow—manual pilot sampling, iterative prompt engineering with multi-model comparison, a production scan of thousands of sheets with automated reporting, and a longitudinal re-scan to track changes.
  • A three-tier risk taxonomy (Clear risk, Potential risk, Low risk) and automated report distribution to teaching teams enable rapid, structured governance.
  • GPT-4o was selected for production due to superior handling of ambiguous cases, with 87% agreement with expert labels after iterative refinement.
  • Year 1 results showed 60.3% Clear risk, 15.2% Potential risk, and 24.5% Low risk; the Year 2 re-scan revealed substantial shifts in risk distributions, with improvements most pronounced in practice-oriented programmes.
  • The method is transferable to other audit domains and supports responsible LLM deployment in higher education governance.
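The tier labels below come from the study; everything else (function names, the idea of normalising free-text model responses before tallying) is a minimal sketch of how such a risk distribution could be computed, not the authors' implementation:

```python
from collections import Counter

# The study's three-tier risk taxonomy.
RISK_TIERS = ("Clear risk", "Potential risk", "Low risk")

def normalize_label(raw: str) -> str:
    """Map a free-text model response onto one of the three tiers.

    Raises ValueError for responses matching no tier, so malformed
    outputs can be flagged for manual review instead of silently binned.
    """
    text = raw.strip().lower()
    for tier in RISK_TIERS:
        if tier.lower() in text:
            return tier
    raise ValueError(f"Unrecognized risk label: {raw!r}")

def risk_distribution(labels: list[str]) -> dict[str, float]:
    """Return each tier's share as a percentage of all classified sheets."""
    counts = Counter(normalize_label(label) for label in labels)
    total = sum(counts.values())
    return {tier: 100 * counts[tier] / total for tier in RISK_TIERS}
```

Run over all classified sheets, `risk_distribution` would yield the headline figures reported above (e.g. 60.3% Clear risk in Year 1).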

Abstract

Purpose: Higher education institutions face increasing pressure to audit course designs for generative AI (GenAI) integration. This paper presents an end-to-end method for using large language models (LLMs) to scan course information sheets at scale, identify where assessments may be vulnerable to student use of GenAI tools, validate system performance through iterative refinement, and operationalise results through direct stakeholder communication.

Method: We developed a four-phase pipeline: (0) manual pilot sampling; (1) iterative prompt engineering with multi-model comparison; (2) a full production scan of 4,684 Bachelor and Master course information sheets (Academic Year 2024-2025) from the Vrije Universiteit Brussel (VUB) with automated report generation and email distribution to teaching teams (91.4% address-matched), using a three-tier risk taxonomy (Clear risk, Potential risk, Low risk); and (3) a longitudinal re-scan of 4,675 sheets after the next catalogue release.

Results: Five iterations of prompt refinement achieved 87% agreement with expert labels. GPT-4o was selected for production based on superior handling of ambiguous cases involving internships and practical components. The Year 1 scan classified 60.3% of courses as Clear risk, 15.2% as Potential risk, and 24.5% as Low risk. The Year 2 comparison revealed substantial shifts in risk distributions, with improvements most pronounced in practice-oriented programmes.

Implications: The method enables institutions to rapidly transform heterogeneous catalogue data into structured, actionable intelligence. The approach is transferable to other audit domains (sustainability, accessibility, pedagogical alignment) and provides a template for responsible LLM deployment in higher education governance.
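The 87% figure is reported as agreement between model and expert labels; assuming this means simple percent agreement (the paper's exact metric is not spelled out here, and the function name is illustrative), it amounts to:

```python
def agreement_rate(model_labels: list[str], expert_labels: list[str]) -> float:
    """Fraction of sheets where the model's risk tier matches the expert's."""
    if len(model_labels) != len(expert_labels):
        raise ValueError("Label lists must be the same length")
    matches = sum(m == e for m, e in zip(model_labels, expert_labels))
    return matches / len(model_labels)
```

Tracking this rate across prompt iterations, as the study does over five refinement rounds, turns prompt engineering into a measurable optimisation loop rather than guesswork.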