Scalable MPI design over InfiniBand using eXtended Reliable Connection
2008 IEEE International Conference on Cluster Computing
A significant component of a high-performance cluster is the compute-node interconnect. InfiniBand is one such interconnect; it enjoys wide success due to its low latency (1.0–3.0 µsec), high bandwidth, and other features. The Message Passing Interface (MPI) is the dominant programming model for parallel scientific applications. As a result, the MPI library and the interconnect play a significant role in scalability. These clusters continue to scale to ever-increasing levels.

doi:10.1109/clustr.2008.4663773 dblp:conf/cluster/KoopSP08