4 Hits in 1.7 sec

BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing [article]

Tianfeng Liu
2021 arXiv   pre-print
Extensive experiments on various GNN models and large graph datasets show that BGL significantly outperforms existing GNN training systems by 20.68x on average. ... Nonetheless, existing systems are inefficient at training large graphs with billions of nodes and edges on GPUs. ... We compare the feature-retrieving time of one mini-batch using different GPUs on Ogbn-papers; PaGraph cannot scale to such large graphs. ...
arXiv:2112.08541v1 fatcat:kzel63n3ircqdpcuie2d4jd7y4

SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures [article]

Yunjae Lee, Jinha Chung, Minsoo Rhu
2022 arXiv   pre-print
In this work, we first conduct a detailed characterization of a state-of-the-art, large-scale GNN training algorithm, GraphSAGE. ... Based on this characterization, we then explore the feasibility of utilizing capacity-optimized NVM SSDs for storing memory-hungry GNN data, which enables large-scale GNN training beyond the limits of main memory. ...
arXiv:2205.04711v1 fatcat:nvgvsja7r5c4zclfx6dvzx526q

Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis [article]

Maciej Besta, Torsten Hoefler
2022 arXiv   pre-print
However, both inference and training of GNNs are complex, and they uniquely combine the features of irregular graph processing with dense and regular computations. ... Graph neural networks (GNNs) are one of the most important and fastest-growing parts of machine learning. ...
arXiv:2205.09702v2 fatcat:qonqh7v7j5b3vgnrgra7wbvr44

OpenGraphGym-MG: Using Reinforcement Learning to Solve Large Graph Optimization Problems on MultiGPU Systems [article]

Weijian Zheng, Dali Wang, Fengguang Song
2021 arXiv   pre-print
Large-scale graph optimization problems arise in many fields. ... Good scalability in both RL training and inference is achieved: as the number of GPUs increases from one to six, the time of a single step of RL training and a single step of RL inference on large graphs ... PaGraph: Scaling GNN Training on Large Graphs via Computation-Aware Caching. ...
arXiv:2105.08764v2 fatcat:r6ejpsl4srdizgh6vld5cffnum