LLM as Clinical Graph Structure Refiner: Enhancing Representation Learning in EEG Seizure Diagnosis

arXiv cs.AI / 5/1/2026


Key Points

  • The paper addresses how noisy EEG signals make graph construction for seizure detection unreliable, often producing redundant or irrelevant edges that degrade downstream performance.
  • It proposes using large language models (LLMs) to refine graph edges, first showing that LLM-driven removal of redundant connections improves seizure detection accuracy and graph interpretability.
  • The framework is two-stage: an initial graph is first generated by a Transformer-based edge predictor with an MLP that assigns probability scores to candidate edges; scores above a threshold define the initial adjacency matrix.
  • The LLM then serves as an edge-set refiner, using both textual and statistical features of node pairs to decide which connections to keep.
  • Experiments on the TUSZ dataset indicate that the LLM-refined graph learning improves performance while producing cleaner, more meaningful graph representations.
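The first stage described above (embed channels, score candidate edges with an MLP, threshold into an adjacency matrix) can be sketched as follows. This is an illustrative NumPy mock-up, not the paper's implementation: the random embeddings stand in for the Transformer's channel representations, and the MLP weights, hidden size, and threshold `tau` are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the Transformer edge predictor's channel embeddings:
# 19 EEG channels, each with a d-dimensional feature vector.
n_nodes, d = 19, 8
Z = rng.normal(size=(n_nodes, d))

# Hypothetical MLP head: concat(z_i, z_j) -> ReLU hidden layer -> sigmoid score.
W1 = rng.normal(size=(2 * d, 16), scale=0.5)
b1 = np.zeros(16)
W2 = rng.normal(size=(16,), scale=0.5)
b2 = 0.0

def edge_prob(zi, zj):
    """Probability score for the candidate edge between two channel embeddings."""
    h = np.maximum(np.concatenate([zi, zj]) @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Threshold the pairwise scores to form the initial (symmetric) adjacency matrix.
tau = 0.5
A = np.zeros((n_nodes, n_nodes), dtype=int)
for i in range(n_nodes):
    for j in range(i + 1, n_nodes):
        A[i, j] = A[j, i] = int(edge_prob(Z[i], Z[j]) > tau)
```

The resulting binary matrix `A` is what the second stage, the LLM edge refiner, would then prune.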

Abstract

Electroencephalogram (EEG) signals are vital for automated seizure detection, but their inherent noise makes robust representation learning challenging. Existing graph construction methods, whether correlation-based or learning-based, often generate redundant or irrelevant edges due to the noisy nature of EEG data. This significantly impairs the quality of graph representation and limits downstream task performance. Motivated by the remarkable reasoning and contextual understanding capabilities of large language models (LLMs), we explore the idea of using LLMs as graph edge refiners. Specifically, we propose a two-stage framework: we first verify that LLM-based edge refinement can effectively identify and remove redundant connections, leading to significant improvements in seizure detection accuracy and more meaningful graph structures. Building on this insight, we further develop a robust solution where the initial graph is constructed using a Transformer-based edge predictor and a multilayer perceptron, which assigns probability scores to potential edges and applies a threshold to determine their existence. The LLM then acts as an edge set refiner, making informed decisions based on both textual and statistical features of node pairs to validate the remaining connections. Extensive experiments on the TUSZ dataset demonstrate that our LLM-refined graph learning framework not only enhances task performance but also yields cleaner and more interpretable graph representations.
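The second stage, the LLM as an edge-set refiner over "textual and statistical features of node pairs", can be sketched roughly as below. Everything here is an assumption for illustration: the paper does not specify its prompt format or statistics, so this mock-up uses Pearson correlation as the lone statistical feature, a hypothetical prompt template, and a `stub_judge` function standing in for a real LLM call.

```python
import numpy as np

def pair_stats(x, y):
    """Statistical feature for one candidate edge; here just Pearson correlation."""
    return {"pearson_r": round(float(np.corrcoef(x, y)[0, 1]), 3)}

def build_prompt(name_i, name_j, stats):
    # Combine textual (channel names) and statistical features of the node pair.
    return (f"EEG channels {name_i} and {name_j} have Pearson correlation "
            f"{stats['pearson_r']}. Should this edge be kept in the "
            "seizure-detection graph? Answer KEEP or DROP.")

def llm_refine(edges, signals, names, ask_llm):
    """Keep only the edges that the (pluggable) LLM judge validates."""
    kept = []
    for i, j in edges:
        prompt = build_prompt(names[i], names[j], pair_stats(signals[i], signals[j]))
        if ask_llm(prompt).strip().upper() == "KEEP":
            kept.append((i, j))
    return kept

# Stand-in judge used here instead of a real LLM API: it parses the correlation
# back out of the prompt and keeps only strongly coupled channel pairs.
def stub_judge(prompt):
    r = float(prompt.split("correlation ")[1].split(". Should")[0])
    return "KEEP" if abs(r) > 0.8 else "DROP"

# Toy example: channel 1 duplicates channel 0, channel 2 is orthogonal to it.
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
signals = [np.sin(t), np.sin(t), np.cos(t)]
kept = llm_refine([(0, 1), (0, 2)], signals, ["Fp1", "Fp2", "Cz"], stub_judge)
# kept == [(0, 1)]: the correlated pair survives, the uncorrelated edge is dropped.
```

Swapping `stub_judge` for an actual chat-completion call would reproduce the framework's intent: the scoring stage proposes edges cheaply, and the LLM vetoes the implausible ones.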