GSR-GNN: Training Acceleration and Memory-Saving Framework of Deep GNNs on Circuit Graph

arXiv cs.LG / 3/31/2026


Key Points

  • Overall, the work claims to make very deep GNNs (up to hundreds of layers) practical for large-scale EDA workloads.

Abstract

Graph Neural Networks (GNNs) show strong promise for circuit analysis, but scaling to modern large-scale circuit graphs is limited by GPU memory and training cost, especially for deep models. We revisit deep GNNs for circuit graphs and show that, when trainable, they significantly outperform shallow architectures, motivating an efficient, domain-specific training framework. We propose Grouped-Sparse-Reversible GNN (GSR-GNN), which enables training GNNs with up to hundreds of layers while reducing both compute and memory overhead. GSR-GNN integrates reversible residual modules with a group-wise sparse nonlinear operator that compresses node embeddings without sacrificing task-relevant information, and employs an optimized execution pipeline to eliminate fragmented activation storage and reduce data movement. On sampled circuit graphs, GSR-GNN achieves up to 87.2% peak memory reduction and over 30× training speedup with negligible degradation in correlation-based quality metrics, making deep GNNs practical for large-scale EDA workloads.
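To see why reversible residual modules save activation memory, here is a minimal RevNet-style sketch: the block's inputs can be reconstructed exactly from its outputs, so intermediate activations need not be stored during training. This is a toy illustration of the general reversible-coupling idea, not the paper's actual GSR-GNN operator; `F` and `G` are hypothetical stand-ins for GNN sub-layers.

```python
# RevNet-style reversible coupling (toy sketch, not GSR-GNN's exact design).
# Forward:  y1 = x1 + F(x2);  y2 = x2 + G(y1)
# Inverse:  x2 = y2 - G(y1);  x1 = y1 - F(x2)

def F(x):  # hypothetical stand-in for one GNN sub-layer
    return [0.5 * v + 1.0 for v in x]

def G(x):  # hypothetical stand-in for a second sub-layer
    return [0.25 * v - 0.5 for v in x]

def forward(x1, x2):
    y1 = [a + b for a, b in zip(x1, F(x2))]
    y2 = [a + b for a, b in zip(x2, G(y1))]
    return y1, y2

def invert(y1, y2):
    # Reconstruct the inputs from the outputs alone --
    # no intermediate activations were stored.
    x2 = [a - b for a, b in zip(y2, G(y1))]
    x1 = [a - b for a, b in zip(y1, F(x2))]
    return x1, x2

x1, x2 = [1.0, 2.0], [3.0, 4.0]
y1, y2 = forward(x1, x2)
r1, r2 = invert(y1, y2)
assert r1 == x1 and r2 == x2  # inputs recovered exactly
```

During backpropagation, a reversible network recomputes each layer's inputs from its outputs this way, trading a small amount of extra compute for activation memory that stays constant in network depth.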