From Flat to Structural: Enhancing Automated Short Answer Grading with GraphRAG

arXiv cs.AI / 3/23/2026


Key Points

  • LLMs struggle with hallucinations and strict rubric adherence in automated short answer grading (ASAG), and standard flat RAG retrieval misses important structural dependencies and multi-hop reasoning.
  • The GraphRAG framework introduces a structured knowledge graph to explicitly model concept dependencies, enabling more coherent and comprehensive evidence retrieval.
  • The approach uses a dual-phase pipeline with Microsoft GraphRAG for high-fidelity graph construction and the HippoRAG neurosymbolic algorithm to perform associative graph traversals.
  • Experimental results on an NGSS dataset show GraphRAG significantly outperforms standard RAG baselines across metrics, with notable gains in evaluating Science and Engineering Practices (SEP).

Abstract

Automated short answer grading (ASAG) is critical for scaling educational assessment, yet large language models (LLMs) often struggle with hallucinations and strict rubric adherence due to their reliance on generalized pre-training. While Retrieval-Augmented Generation (RAG) mitigates these issues, standard "flat" vector retrieval mechanisms treat knowledge as isolated fragments, failing to capture the structural relationships and multi-hop reasoning essential for complex educational content. To address this limitation, we introduce a Graph Retrieval-Augmented Generation (GraphRAG) framework that organizes reference materials into a structured knowledge graph to explicitly model dependencies between concepts. Our methodology employs a dual-phase pipeline: utilizing Microsoft GraphRAG for high-fidelity graph construction and the HippoRAG neurosymbolic algorithm to execute associative graph traversals, thereby retrieving comprehensive, connected subgraphs of evidence. Experimental evaluations on a Next Generation Science Standards (NGSS) dataset demonstrate that this structural approach significantly outperforms standard RAG baselines across all metrics. Notably, the HippoRAG implementation achieved substantial improvements in evaluating Science and Engineering Practices (SEP), confirming the superiority of structural retrieval in verifying the logical reasoning chains required for higher-order academic assessment.
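To make the "associative graph traversal" idea concrete: HippoRAG-style retrieval scores graph nodes by personalized PageRank seeded at the concepts mentioned in the query, so evidence connected to those seeds (even multiple hops away) outranks unrelated material. The sketch below is a minimal, self-contained illustration of that retrieval step only; the toy concept graph, seed set, and scoring parameters are hypothetical and are not taken from the paper's actual implementation.

```python
# Illustrative sketch of associative retrieval via personalized PageRank,
# the graph-traversal idea underlying HippoRAG-style retrieval.
# The dependency graph and seed concepts below are hypothetical examples.

def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """Score every node by relevance to the seed (query) concepts.

    graph: dict mapping node -> list of outgoing neighbor nodes.
    seeds: set of nodes where the random walk restarts (teleports).
    """
    nodes = list(graph)
    # Restart distribution concentrated on seed nodes: the walk always
    # teleports back to concepts mentioned in the student answer.
    teleport = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in nodes}
    rank = dict(teleport)
    for _ in range(iters):
        nxt = {n: (1 - damping) * teleport[n] for n in nodes}
        for n in nodes:
            out = graph[n]
            if not out:
                continue  # dangling node: its mass is simply dropped here
            share = damping * rank[n] / len(out)
            for m in out:
                nxt[m] += share
        rank = nxt
    return rank

# Hypothetical rubric-concept dependency graph
# (directed edge = "supports / is prerequisite for").
graph = {
    "photosynthesis": ["glucose", "oxygen"],
    "glucose": ["cellular_respiration"],
    "oxygen": ["cellular_respiration"],
    "cellular_respiration": ["atp"],
    "atp": [],
    "mitosis": ["cell_cycle"],
    "cell_cycle": [],
}

# Seed concepts extracted from a (hypothetical) student answer. Traversal
# pulls in connected multi-hop evidence, not just flat nearest neighbors.
seeds = {"photosynthesis"}
scores = personalized_pagerank(graph, seeds)
retrieved = sorted(scores, key=scores.get, reverse=True)[:4]
print(retrieved)  # multi-hop neighbors of the seed rank highest
```

Note how `cellular_respiration` and `atp`, which are two and three hops from the seed, still score far above the disconnected `mitosis` branch; a flat vector retriever has no such notion of connectivity.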