A Cross-graph Tuning-free GNN Prompting Framework
arXiv cs.LG / 4/2/2026
Key Points
- The paper proposes a Cross-graph Tuning-free Prompting Framework (CTP) to improve GNN prompting by enabling generalization across graphs without task-specific retraining or parameter updates.
- CTP is designed to work for both homogeneous and heterogeneous graphs and can be deployed directly to unseen graphs without additional parameter tuning, positioning it as a plug-and-play inference engine.
- Experiments on few-shot prediction tasks show CTP delivers substantial accuracy improvements over state-of-the-art methods, with an average gain of 30.8% and a maximum gain of 54%.
- The work addresses a key limitation of prior graph prompting approaches, namely weak cross-graph generalization, which the authors argue undermines the practical promise of tuning-free prompting.