Graph-Aware Text-Only Backdoor Poisoning for Text-Attributed Graphs
arXiv cs.LG / 3/24/2026
Key Points
- The paper studies a realistic threat model for graph learning systems with text-attributed nodes, where an attacker poisons only node text without altering the graph structure.
- It introduces TAGBD, which identifies highly influenceable training nodes, uses a shadow graph model to generate natural-looking trigger text, and injects the trigger by either replacing node text or appending a short phrase.
- Experiments on three benchmark datasets show TAGBD is highly effective, transfers across different graph model architectures, and remains effective even when common defenses are applied.
- The findings highlight that text-only manipulation is a practical backdoor channel in text-attributed graphs, motivating defenses that validate both node content and graph connections.
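The poisoning recipe summarized above (pick influenceable training nodes, inject a natural-looking trigger phrase into their text, relabel them with the attacker's target class, leave the graph edges untouched) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of node degree as a proxy for "influenceability", and the trigger phrase are all assumptions for demonstration.

```python
# Hypothetical sketch of a text-only backdoor poisoning step, loosely
# following the TAGBD recipe described above. The degree-based node
# selection and all names here are illustrative assumptions.

def poison_text_nodes(node_texts, node_labels, degrees, train_ids,
                      trigger="as recent graph studies consistently show",
                      target_label=0, budget=0.05, mode="append"):
    """Return poisoned copies of node texts/labels; edges are untouched."""
    # Rank candidate training nodes by degree as a crude stand-in for
    # how strongly they influence their neighborhood during training.
    ranked = sorted(train_ids, key=lambda i: degrees[i], reverse=True)
    n_poison = max(1, int(budget * len(train_ids)))
    victims = ranked[:n_poison]

    texts = dict(node_texts)
    labels = dict(node_labels)
    for i in victims:
        if mode == "append":
            # Append a short natural-looking trigger phrase.
            texts[i] = texts[i].rstrip(".") + ". " + trigger + "."
        else:
            # Alternatively, replace the node text entirely.
            texts[i] = trigger
        labels[i] = target_label  # backdoor association: trigger -> target class
    return texts, labels, victims
```

At test time the attacker would add the same trigger phrase to a victim node's text to steer the classifier toward `target_label`; in the actual paper the trigger text is generated by a shadow graph model rather than fixed by hand.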