Breaking HPC Barriers with the 56GbE Cloud

Muhammad Atif, Rika Kobayashi, Benjamin J. Menadue, Ching Yeh Lin, Matthew Sanderson, Allan Williams
2016 Procedia Computer Science  
With the widespread adoption of cloud computing, high-performance computing (HPC) is no longer limited to organisations with the funds and manpower necessary to house and run a supercomputer. However, the performance of large-scale scientific applications in the cloud has in the past been constrained by latency and bandwidth. The main reasons for these constraints are the design decisions of cloud providers, primarily focusing on high-density applications such as web services and data hosting.
In this paper, we provide an overview of a high-performance OpenStack cloud implementation at the National Computational Infrastructure (NCI). This cloud is targeted at high-performance scientific applications, and enables scientists to build their own clusters when their demands and software stacks conflict with traditional bare-metal HPC environments. We present the architecture of our 56 GbE cloud and a preliminary set of HPC benchmark results against the more traditional cloud and native InfiniBand HPC environments. Three different network interconnects and configurations were tested as part of the cloud deployment: 10G Ethernet, 56G fat-tree Ethernet and native FDR full fat-tree InfiniBand (IB). These three solutions are discussed from the viewpoint of on-demand HPC clusters, focusing on bandwidth, latency and security. A detailed analysis of these metrics in the context of micro-benchmarks and scientific applications is presented, including the effects of using TCP and RDMA on scientific applications.
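The latency micro-benchmarks the abstract refers to typically follow a ping-pong pattern: a small message is bounced between two endpoints and the mean round-trip time is reported. As a minimal illustrative sketch only (this is not the paper's actual benchmark suite; all names and parameters below are assumptions), a loopback TCP version could look like:

```python
import socket
import threading
import time

def run_echo_server(listener):
    """Accept one connection and echo every message back until the peer closes."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt(iters=1000, msg=b"x" * 8):
    """Return the mean TCP round-trip time in microseconds over loopback.

    Illustrative only: real HPC micro-benchmarks run across two nodes
    and sweep message sizes; iters and msg here are arbitrary choices.
    """
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))   # ephemeral port
    listener.listen(1)
    port = listener.getsockname()[1]
    server = threading.Thread(target=run_echo_server, args=(listener,))
    server.start()

    client = socket.socket()
    client.connect(("127.0.0.1", port))
    # Disable Nagle's algorithm so small messages are sent immediately.
    client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    start = time.perf_counter()
    for _ in range(iters):
        client.sendall(msg)
        received = 0
        while received < len(msg):          # handle partial reads
            received += len(client.recv(64))
    elapsed = time.perf_counter() - start

    client.close()
    server.join()
    listener.close()
    return elapsed / iters * 1e6

if __name__ == "__main__":
    print(f"mean RTT: {measure_rtt():.1f} us")
```

An RDMA variant would bypass the kernel TCP stack entirely (e.g. via verbs), which is precisely the source of the latency gap the paper analyses between the Ethernet and InfiniBand configurations.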
doi:10.1016/j.procs.2016.07.174