Evaluating Large Language Models on Computer Science University Exams in Data Structures
arXiv cs.CL · April 28, 2026
Key Points
- The paper presents a comprehensive evaluation of large language models (LLMs) on computer science data structures exam questions.
- It introduces a new benchmark dataset built from Tel Aviv University (TAU) exam questions to test LLM performance on closed-ended and multiple-choice question formats (see the sketch after this list).
- The study evaluates OpenAI’s GPT-4o and Anthropic’s Claude 3.5, along with smaller models (Mathstral 7B and LLaMA 3 8B), using the TAU exam benchmark.
- The results are intended to shed light on how well today’s LLMs perform on CS education assessments and question-answering tasks.
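To make the multiple-choice setup concrete, here is a minimal sketch of what such an accuracy-based evaluation loop might look like. The `ExamQuestion` format and the `call_model()` helper are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch of a multiple-choice exam evaluation loop.
# The dataset schema and call_model() are hypothetical placeholders,
# not the paper's actual harness.

from dataclasses import dataclass


@dataclass
class ExamQuestion:
    prompt: str         # question text, e.g. an AVL-tree rotation scenario
    choices: list[str]  # candidate answers, labeled A, B, C, ...
    answer: str         # gold label, e.g. "B"


def call_model(model: str, prompt: str) -> str:
    """Placeholder for an API call to the model under test.

    Expected to return a single choice letter.
    """
    raise NotImplementedError


def evaluate(model: str, questions: list[ExamQuestion]) -> float:
    """Return the fraction of questions the model answers correctly."""
    correct = 0
    for q in questions:
        labeled = "\n".join(
            f"{chr(ord('A') + i)}. {c}" for i, c in enumerate(q.choices)
        )
        prompt = f"{q.prompt}\n{labeled}\nAnswer with a single letter only."
        prediction = call_model(model, prompt).strip().upper()[:1]
        correct += prediction == q.answer
    return correct / len(questions)
```

In the paper's setting, the questions come from TAU data structures exams and the models under test include GPT-4o, Claude 3.5, Mathstral 7B, and LLaMA 3 8B; the harness details above are only one plausible way to score them.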
Related Articles
- Big Tech firms are accelerating AI investments and integration, while regulators and companies focus on safety and responsible adoption. (Dev.to)
- Everyone Wants AI Agents. Fewer Teams Are Ready for the Messy Business Context Behind Them (Dev.to)
- AI Programming Tool Comparison 2026: Claude Code vs Cursor vs Gemini CLI vs Codex (Dev.to)
- How I Improved My YouTube Shorts and Podcast Audio Workflow with AI Tools (Dev.to)
- An improvement of the convergence proof of the ADAM-Optimizer (Dev.to)