A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2017. The file type is application/pdf.
Performance Models for CPU-GPU Data Transfers
2014
2014 14th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing
Many GPU applications perform data transfers to and from GPU memory at regular intervals, for example because the data does not fit into GPU memory or because of inter-node communication at the end of each time step. Overlapping GPU computation with CPU-GPU communication can reduce the cost of moving data. Several different techniques exist for transferring data to and from GPU memory and for overlapping those transfers with GPU computation, but it is currently not known when to apply which method.
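One common overlap technique the abstract alludes to is splitting a transfer into chunks and issuing each chunk's copy and kernel launch on its own CUDA stream, so copies over PCIe proceed while earlier chunks compute. The sketch below is illustrative only and not taken from the paper; the kernel, sizes, and chunk count are placeholder assumptions.

```cuda
// Hypothetical sketch: overlapping host-device copies with kernel execution
// using CUDA streams and pinned (page-locked) host memory. Not the paper's
// code; N, CHUNKS, and the scale kernel are made-up placeholders.
#include <cuda_runtime.h>

__global__ void scale(float *d, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

int main(void) {
    const int N = 1 << 22, CHUNKS = 4, CHUNK = N / CHUNKS;
    float *h, *d;
    cudaMallocHost(&h, N * sizeof(float));  // pinned memory: required for truly async copies
    cudaMalloc(&d, N * sizeof(float));

    cudaStream_t s[CHUNKS];
    for (int c = 0; c < CHUNKS; ++c) cudaStreamCreate(&s[c]);

    for (int c = 0; c < CHUNKS; ++c) {
        int off = c * CHUNK;
        // The copy for chunk c can overlap with the kernel running on chunk c-1,
        // because each chunk lives on a different stream.
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, s[c]);
        scale<<<(CHUNK + 255) / 256, 256, 0, s[c]>>>(d + off, CHUNK);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, s[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < CHUNKS; ++c) cudaStreamDestroy(s[c]);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}
```

Whether this streamed variant beats a single synchronous transfer depends on transfer size, PCIe bandwidth, and kernel duration, which is precisely the kind of trade-off a performance model would predict.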
doi:10.1109/ccgrid.2014.16
dblp:conf/ccgrid/WerkhovenMSB14
fatcat:mqpnadib35evlkjbfgc6hgisui