C-SAW: A Framework for Graph Sampling and Random Walk on GPUs
[article]
2020
arXiv
pre-print
In this paper, we propose, to the best of our knowledge, the first GPU-based framework for graph sampling/random walk. ...
First, our framework provides a generic API which allows users to implement a wide range of sampling and random walk algorithms with ease. ...
C-SAW: A Bias-Centric Sampling Framework. C-SAW offloads sampling and random walk to GPUs with the goal of a simple and expressive API and a high-performance framework. ...
arXiv:2009.09103v1
fatcat:m2j6vv7twnhjpimnme42jsf5aa
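To make the "generic API" claim above concrete, here is a minimal sketch of a bias-centric sampling step: the user supplies a bias function over candidate edges, and the framework picks the next vertex with probability proportional to that bias. This is an illustration of the concept only, not C-SAW's actual API; the graph layout and the `bias` callback are assumptions for the example.

```python
import random

def biased_step(adj, current, bias):
    """Pick one neighbor of `current` with probability proportional to bias(current, v)."""
    neighbors = adj[current]
    if not neighbors:
        return None
    weights = [bias(current, v) for v in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

# Example: a degree-biased walk on a toy graph (all names are illustrative).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
walk = [0]
for _ in range(5):
    nxt = biased_step(adj, walk[-1], bias=lambda u, v: len(adj[v]))
    if nxt is None:
        break
    walk.append(nxt)
print(walk)
```

Swapping the bias callback (uniform, degree-based, metapath-aware, and so on) is what lets one API cover many sampling and random walk algorithms.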
ThunderRW: An In-Memory Graph Random Walk Engine (Complete Version)
[article]
2021
arXiv
pre-print
As random walks are a powerful tool in many graph processing, mining, and learning applications, this paper proposes an efficient in-memory random walk engine named ThunderRW. ...
In contrast to existing parallel systems that improve the performance of a single graph operation, ThunderRW supports massively parallel random walks. ...
We consider C-SAW [46] , the state-of-the-art RW framework on GPUs, as well. ...
arXiv:2107.11983v1
fatcat:wr5aoycf4bb4zmj34p7srw7d6u
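The workload shape ThunderRW targets is many independent walks advanced concurrently rather than one large traversal. The sketch below shows that shape in plain Python, stepping all walkers in lockstep; it is an illustration of the workload, not ThunderRW's engine or API.

```python
import random

def run_walks(adj, sources, length):
    """Advance one walker per source in lockstep; each step picks a uniform neighbor."""
    walks = [[s] for s in sources]
    for _ in range(length):
        for w in walks:
            neighbors = adj[w[-1]]
            if neighbors:
                w.append(random.choice(neighbors))
    return walks

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
for w in run_walks(adj, sources=[0, 1, 2, 3], length=4):
    print(w)
```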
BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing
[article]
2021
arXiv
pre-print
The main bottlenecks are the processes of preparing data for GPUs: subgraph sampling and feature retrieval. ...
By a co-design of caching policy and the order of sampling, we find a sweet spot of low overhead and high cache hit ratio. ...
In subgraph sampling, random hashing partitioning leads to extensive cross-partition communication. Some works try to improve graph sampling performance on GPUs, such as NextDoor [28] and C-SAW [41] . ...
arXiv:2112.08541v1
fatcat:kzel63n3ircqdpcuie2d4jd7y4
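The "caching policy and order of sampling" trade-off above can be illustrated with a small sketch: keep the features of frequently touched (here: high-degree) vertices resident and measure the hit ratio of subsequent feature fetches. This is a generic illustration of the trade-off, not BGL's actual caching policy; the helper names are made up for the example.

```python
def build_cache(adj, features, capacity):
    """Cache features of the `capacity` highest-degree vertices."""
    hot = sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:capacity]
    return {v: features[v] for v in hot}

def fetch(v, cache, features, stats):
    """Return features for v, counting cache hits and misses."""
    if v in cache:
        stats["hits"] += 1
        return cache[v]
    stats["misses"] += 1
    return features[v]  # a miss stands in for a slow fetch (e.g. over the network)

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
features = {v: [float(v)] * 4 for v in adj}
cache = build_cache(adj, features, capacity=2)
stats = {"hits": 0, "misses": 0}
for v in [0, 1, 3, 0, 2, 0]:  # vertices touched by some sampled subgraphs
    fetch(v, cache, features, stats)
print(stats["hits"] / (stats["hits"] + stats["misses"]))
```

Ordering the sampled mini-batches so that cached vertices are reused soon after they are loaded is what pushes the hit ratio up without growing the cache.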
Scalable Graph Neural Network Training: The Case for Sampling
[article]
2021
arXiv
pre-print
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs. ...
Scalability is challenging with both approaches, but we make a case that research should focus on sample-based training since it is a more promising approach. ...
This work was partially supported by a Facebook Systems for Machine Learning Award and an AWS Cloud Credit for Research grant. ...
arXiv:2105.02315v1
fatcat:jke6ekujpfdsdjmx5d4avkjeky
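For readers unfamiliar with the distinction, sample-based training replaces full-graph propagation with a mini-batch of target nodes and a fixed-size neighbor sample per node (GraphSAGE-style fanout). The sketch below shows that sampling step in generic form; it is not code from the cited paper, and the function and parameter names are illustrative.

```python
import random

def sample_minibatch(adj, batch_size, fanout):
    """Draw a mini-batch of target nodes and at most `fanout` neighbors per target."""
    targets = random.sample(list(adj), batch_size)
    block = {}
    for t in targets:
        neighbors = adj[t]
        block[t] = random.sample(neighbors, min(fanout, len(neighbors)))
    return targets, block

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
targets, block = sample_minibatch(adj, batch_size=2, fanout=2)
print(targets, block)
```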
Is Parallel Programming Hard, And, If So, What Can You Do About It? (Release v2021.12.22a)
[article]
2021
arXiv
pre-print
Nevertheless, you should think of the information in this book as a foundation on which to build, rather than as a completed cathedral. ...
painstakingly reinvent old wheels, enabling them to instead focus their energy and creativity on new frontiers. ...
The design for a bridge meant to allow people to walk across a small brook might be as simple as a single wooden plank. ...
arXiv:1701.00854v4
fatcat:pxiajyczebd5pm76htwnrczhm4