Scalable Classification of Course Information Sheets Using Large Language Models: A Reusable Institutional Method for Academic Quality Assurance
arXiv cs.LG · March 17, 2026
💬 Opinion · Tools & Practical Usage · Models & Research
Key Points
- The study presents an end-to-end, LLM-based pipeline to audit course information sheets for GenAI risk at scale in higher education.
- It implements a four-phase workflow—manual pilot sampling, iterative prompt engineering with multi-model comparison, a production scan of thousands of sheets with automated reporting, and a longitudinal re-scan to track changes.
- A three-tier risk taxonomy (Clear risk, Potential risk, Low risk) and automated report distribution to teaching teams enable rapid, structured governance.
- GPT-4o was selected for production due to superior handling of ambiguous cases, with 87% agreement with expert labels after iterative refinement.
- Year 1 results showed 60.3% Clear risk, 15.2% Potential risk, and 24.5% Low risk; the Year 2 re-scan revealed substantial shifts in the risk distribution, with pronounced improvements in practice-oriented programs.
- The method is transferable to other audit domains and supports responsible LLM deployment in higher-education governance.
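The classification step described above could be sketched as a thin post-processing layer that maps a model's free-text verdict onto the paper's three-tier taxonomy. This is an illustrative sketch only: the prompt wording, parsing rules, and fallback behavior here are assumptions, not the authors' implementation, and the model call is stubbed out.

```python
from enum import Enum


class RiskTier(Enum):
    """Three-tier risk taxonomy from the study."""
    CLEAR = "Clear risk"
    POTENTIAL = "Potential risk"
    LOW = "Low risk"


# Hypothetical prompt template; the actual wording used in the study is not given.
PROMPT_TEMPLATE = (
    "You audit course information sheets for GenAI risk.\n"
    "Classify the sheet below as exactly one of: "
    "Clear risk, Potential risk, Low risk.\n"
    "Answer with the label only.\n\nSheet:\n{sheet_text}"
)


def parse_verdict(model_output: str) -> RiskTier:
    """Map a model's free-text answer onto the three-tier taxonomy.

    Ambiguous answers fall back to POTENTIAL (the middle tier),
    so unclear cases get routed to human review rather than
    silently passing as low risk.
    """
    text = model_output.strip().lower()
    if "clear" in text:
        return RiskTier.CLEAR
    if "low" in text:
        return RiskTier.LOW
    return RiskTier.POTENTIAL


# Example with a stubbed model response (no API call):
verdict = parse_verdict("Clear risk: assessment relies on unsupervised essays.")
print(verdict.value)  # -> Clear risk
```

In a production scan, `parse_verdict` would sit between the batched model calls and the automated report generator, and the fallback-to-middle-tier choice trades a few false "Potential" flags for never missing an ambiguous sheet.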