LogicPoison: Logical Attacks on Graph Retrieval-Augmented Generation
arXiv cs.CL / 4/6/2026
Key Points
- GraphRAG improves LLM reasoning by grounding answers in knowledge graphs, and it is often viewed as resistant to classic RAG attacks like text poisoning and prompt injection.
- The paper argues that GraphRAG’s security actually depends heavily on the graph’s topological/logical integrity, which attackers can corrupt without changing surface-level text.
- It introduces LogicPoison, an attack framework that uses type-preserving entity swapping to disrupt both global connectivity (logic hubs) and query-specific multi-hop reasoning paths (reasoning bridges).
- Experiments across multiple benchmarks show that LogicPoison bypasses GraphRAG defenses while remaining stealthy, degrading answer quality more severely than state-of-the-art poisoning baselines.
- The authors provide an open-source implementation of LogicPoison, enabling reproducibility and further security assessment of GraphRAG systems.
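To make the core idea of type-preserving entity swapping concrete, here is a minimal illustrative sketch (not the paper's actual implementation; the toy triples, type map, and function name are all invented for this example). The point it shows: swapping two entities of the same type keeps every poisoned triple type-consistent, so the corruption is hard to spot at the surface level, while multi-hop reasoning paths through the swapped entity are silently rerouted.

```python
# Illustrative sketch of type-preserving entity swapping in a knowledge
# graph. This is a hypothetical toy example, not LogicPoison's code.
import random

# Toy KG: (head, relation, tail) triples, plus an entity-type map.
triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Seine", "flows_through", "Paris"),
    ("Spree", "flows_through", "Berlin"),
]
entity_type = {
    "Paris": "City", "Berlin": "City",
    "France": "Country", "Germany": "Country",
    "Seine": "River", "Spree": "River",
}

def type_preserving_swap(triples, entity_type, target):
    """Replace `target` everywhere with another entity of the same type:
    each triple still type-checks, but the graph's logic is corrupted."""
    same_type = [e for e, t in entity_type.items()
                 if t == entity_type[target] and e != target]
    substitute = random.choice(same_type)
    return [
        (substitute if h == target else h, r,
         substitute if t == target else t)
        for (h, r, t) in triples
    ]

poisoned = type_preserving_swap(triples, entity_type, "Paris")
# Every poisoned triple is still type-consistent (City capital_of Country,
# River flows_through City), but any multi-hop path through "Paris" now
# routes through the wrong city.
```

An attacker targeting a high-degree entity ("logic hub") would disconnect or misroute many reasoning paths at once; targeting an entity on a specific query's multi-hop chain ("reasoning bridge") would break just that chain, which matches the two attack surfaces the paper describes.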