
$P^2$GNN: Two Prototype Sets to Boost GNN Performance

arXiv cs.LG / 11 Mar 2026


Key Points

  • $P^2$GNN is a new plug-and-play technique for improving the performance of message-passing graph neural networks (MP-GNNs); it introduces two prototype sets to enrich global context and denoise noisy local neighborhoods.
  • It addresses a key limitation of standard MP-GNNs: because they rely mainly on local context and assume strong homophily, they struggle with noisy local neighborhoods.
  • The method treats prototypes as universally accessible neighbor nodes to enrich global information, and aligns messages to clustered prototypes for a denoising effect, improving performance across a variety of GNN architectures.
  • Extensive experiments on 18 datasets, including proprietary e-commerce datasets and open-source datasets, demonstrate that $P^2$GNN outperforms production models on node recommendation and node classification tasks.
  • Qualitative analysis confirms that introducing global context and mitigating local noise substantially improve GNN effectiveness, establishing $P^2$GNN as a leading approach to graph representation learning.
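The first view above, prototypes as universally accessible neighbors, can be sketched as a modified mean aggregation step. This is an illustrative reconstruction, not the authors' implementation; the function name, the tensor shapes, and the choice of mean aggregation are assumptions:

```python
import numpy as np

def aggregate_with_prototypes(x, adj, prototypes):
    """One mean-aggregation step in which every node also receives
    messages from a shared, globally accessible prototype set.
    x: (n, d) node features; adj: (n, n) binary adjacency;
    prototypes: (k, d) prototype vectors (shapes are illustrative).
    """
    k = prototypes.shape[0]
    proto_sum = prototypes.sum(axis=0)      # prototypes reach every node
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        nbrs = np.flatnonzero(adj[i])       # local neighborhood of node i
        total = x[nbrs].sum(axis=0) + proto_sum
        out[i] = total / (len(nbrs) + k)    # mean over the extended neighborhood
    return out
```

Because the prototypes enter every node's aggregation, even nodes with sparse or uninformative local neighborhoods receive a global signal.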

Computer Science > Machine Learning

arXiv:2603.09195 (cs)
[Submitted on 10 Mar 2026]

Title: $P^2$GNN: Two Prototype Sets to boost GNN Performance

Abstract: Message Passing Graph Neural Networks (MP-GNNs) have garnered attention for addressing various industry challenges, such as user recommendation and fraud detection. However, they face two major hurdles: (1) heavy reliance on local context, often lacking information about the global context or graph-level features, and (2) assumption of strong homophily among connected nodes, struggling with noisy local neighborhoods. To tackle these, we introduce $P^2$GNN, a plug-and-play technique leveraging prototypes to optimize message passing, enhancing the performance of the base GNN model. Our approach views the prototypes in two ways: (1) as universally accessible neighbors for all nodes, enriching global context, and (2) aligning messages to clustered prototypes, offering a denoising effect. We demonstrate the extensibility of our proposed method to all message-passing GNNs and conduct extensive experiments across 18 datasets, including proprietary e-commerce datasets and open-source datasets, on node recommendation and node classification tasks. Results show that $P^2$GNN outperforms production models in e-commerce and achieves the top average rank on open-source datasets, establishing it as a leading approach. Qualitative analysis supports the value of global context and noise mitigation in the local neighborhood in enhancing performance.
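The abstract's second view, aligning messages to clustered prototypes for denoising, can be sketched as a soft nearest-prototype projection. The dot-product similarity, temperature, and blending weight `alpha` below are hypothetical choices for illustration, not the paper's exact mechanism:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def align_messages(messages, prototypes, alpha=0.5, temp=1.0):
    """Blend each message toward a convex combination of prototypes
    (soft cluster assignment), so outlier messages are pulled toward
    cluster centers -- a denoising effect. alpha=0 keeps messages
    unchanged; alpha=1 replaces them with their prototype mixture.
    """
    sims = messages @ prototypes.T / temp   # (m, k) message-prototype similarities
    weights = softmax(sims)                 # soft assignment over prototypes
    aligned = weights @ prototypes          # (m, d) prototype mixture per message
    return (1 - alpha) * messages + alpha * aligned
```

With `alpha` between 0 and 1 this interpolates between the raw message and its prototype reconstruction, trading fidelity to the (possibly noisy) neighborhood against agreement with the global cluster structure.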
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2603.09195 [cs.LG]
  (or arXiv:2603.09195v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09195

Submission history

From: Arihant Jain
[v1] Tue, 10 Mar 2026 05:10:02 UTC (740 KB)