Are LLM-Enhanced Graph Neural Networks Robust against Poisoning Attacks?
arXiv cs.LG / March 30, 2026
Key Points
- The paper studies whether LLM-enhanced Graph Neural Networks are robust to poisoning attacks that manipulate both graph structure and node textual attributes during training.
- It proposes a systematic robustness evaluation framework that tests 24 victim models built from combinations of eight LLM/LM-based feature enhancers and three GNN backbones.
- The evaluation spans six structural poisoning attacks (targeted and non-targeted) and three textual poisoning attacks at character, word, and sentence levels, across four datasets selected to avoid LLM pretraining ground-truth leakage.
- Experimental results show that LLM-enhanced GNNs retain higher accuracy and a lower Relative Drop in Accuracy (RDA) than a shallow embedding baseline under many attack settings (see the sketch after this list).
- The authors attribute the improved robustness to the way enhanced node representations encode structural and label information. They also outline future offensive and defensive directions, introduce a combined structure-and-text attack alongside a graph purification defense, and release the framework's source code.
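
To ground the headline metric, here is a minimal Python sketch of the Relative Drop in Accuracy computation over the 8 × 3 victim-model grid. The RDA formula, the enhancer placeholders, and the GCN/GAT/GraphSAGE backbone names are illustrative assumptions, not the paper's released code.

```python
from itertools import product

# The evaluation grid described in the paper: 8 LLM/LM feature enhancers
# x 3 GNN backbones = 24 victim models. The names below are placeholders;
# the paper's actual component list may differ.
ENHANCERS = [f"enhancer_{i}" for i in range(1, 9)]
BACKBONES = ["GCN", "GAT", "GraphSAGE"]

def relative_drop_in_accuracy(acc_clean: float, acc_poisoned: float) -> float:
    """Fraction of clean test accuracy lost under a poisoning attack.
    Assumed definition: RDA = (acc_clean - acc_poisoned) / acc_clean."""
    return (acc_clean - acc_poisoned) / acc_clean

# Enumerate the 24 victim models the framework would evaluate.
victims = list(product(ENHANCERS, BACKBONES))
assert len(victims) == 24

# Illustrative numbers only: a model whose test accuracy falls from 0.85
# clean to 0.68 under attack has lost 20% of its clean accuracy.
print(f"{relative_drop_in_accuracy(0.85, 0.68):.2f}")  # -> 0.20
```

A lower RDA under the same attack budget is what the paper reads as greater robustness: the enhanced models lose a smaller fraction of their clean accuracy than the shallow embedding baseline.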