Can LLMs Fool Graph Learning? Exploring Universal Adversarial Attacks on Text-Attributed Graphs

arXiv cs.AI / 2026/3/24


Key Points

  • The paper studies vulnerabilities in text-attributed graphs (TAGs), where combining topology with node text improves learning but introduces new adversarial risk surfaces.
  • It highlights the difficulty of creating universal attacks that generalize across different backbones (GNNs vs. PLMs), especially under black-box settings where many models are exposed only via APIs.
  • The authors propose BadGraph, an attack framework that elicits an LLM’s graph understanding and jointly perturbs node topology and textual semantics to craft cross-modal, generalizable attack “shortcuts.”
  • Experiments indicate BadGraph can produce universal and effective attacks against both GNN-based and LLM-based graph reasoners, reporting performance drops of up to 76.3%.
  • The work includes both theoretical and empirical analyses suggesting the attacks can be stealthy while remaining interpretable.

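To make the attack surface concrete, here is a hypothetical minimal sketch of a text-attributed graph and a joint topology-plus-text perturbation. The `TAG` class, the `perturb` method, and the trigger string are all illustrative assumptions, not BadGraph's actual implementation; the sketch only shows what "jointly perturbing node topology and textual semantics" means mechanically.

```python
# Illustrative sketch (not the paper's code): a text-attributed graph
# where each node carries a text attribute, plus a cross-modal
# perturbation that edits structure and text together.

class TAG:
    def __init__(self, texts, edges):
        self.texts = dict(texts)                        # node id -> text attribute
        self.edges = set(frozenset(e) for e in edges)   # undirected edge set

    def perturb(self, target, influencer, trigger):
        """Jointly perturb topology (wire the target to an influencer
        node) and semantics (append an adversarial trigger phrase to
        the target node's text)."""
        self.edges.add(frozenset((target, influencer)))
        self.texts[target] = self.texts[target] + " " + trigger

g = TAG({0: "paper on GNNs", 1: "survey of PLMs", 2: "graph attacks"},
        [(0, 1)])
g.perturb(target=0, influencer=2, trigger="benign-looking shortcut phrase")
print(len(g.edges))   # -> 2
print(g.texts[0])     # -> "paper on GNNs benign-looking shortcut phrase"
```

Because the perturbation touches both modalities at once, a purely structural defense or a purely textual filter would each miss half of it, which is the cross-modal point the bullet list makes.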
Abstract

Text-attributed graphs (TAGs) enhance graph learning by integrating rich textual semantics and topological context for each node. While boosting expressiveness, they also expose new vulnerabilities in graph learning through text-based adversarial surfaces. Recent advances leverage diverse backbones, such as graph neural networks (GNNs) and pre-trained language models (PLMs), to capture both structural and textual information in TAGs. This diversity raises a key question: How can we design universal adversarial attacks that generalize across architectures to assess the security of TAG models? The challenge arises from the stark contrast in how different backbones (GNNs and PLMs) perceive and encode graph patterns, coupled with the fact that many PLMs are only accessible via APIs, limiting attacks to black-box settings. To address this, we propose BadGraph, a novel attack framework that deeply elicits large language models' (LLMs) understanding of general graph knowledge to jointly perturb both node topology and textual semantics. Specifically, we design a target influencer retrieval module that leverages graph priors to construct cross-modally aligned attack shortcuts, thereby enabling efficient LLM-based perturbation reasoning. Experiments show that BadGraph achieves universal and effective attacks across GNN- and LLM-based reasoners, with up to a 76.3% performance drop, while theoretical and empirical analyses confirm its stealthy yet interpretable nature.
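The abstract's "target influencer retrieval" step can be loosely pictured as ranking candidate nodes by a graph prior before asking the LLM to reason about perturbations. The sketch below uses plain degree centrality as that prior; the function name, the choice of degree as the prior, and the toy graph are all assumptions for illustration, since the paper's actual module is more involved.

```python
# Hypothetical sketch: use a simple graph prior (degree centrality) to
# retrieve high-influence candidate nodes near which to plant
# perturbations, narrowing the search space before any LLM reasoning.

from collections import defaultdict

def degree_influencers(edges, target, k=2):
    """Return the k highest-degree nodes other than the target,
    as candidate 'influencers' for attack shortcuts."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    candidates = [n for n in deg if n != target]
    # rank by descending degree, break ties by node id for determinism
    return sorted(candidates, key=lambda n: (-deg[n], n))[:k]

edges = [(0, 1), (1, 2), (1, 3), (2, 3), (3, 4)]
print(degree_influencers(edges, target=0, k=2))  # -> [1, 3]
```

Retrieving a small influencer set first keeps the subsequent per-node perturbation reasoning cheap, which matches the abstract's claim of "efficient LLM-based perturbation reasoning" under black-box constraints.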