
Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics

Jiuxing Liu, Balasubramanian Chandrasekaran, Jiesheng Wu, Weihang Jiang, Sushmitha Kini, Weikuan Yu, Darius Buntinas, Peter Wyckoff, D. K. Panda
2003 Proceedings of the 2003 ACM/IEEE conference on Supercomputing - SC '03  
In this paper, we present a comprehensive performance comparison of MPI implementations over InfiniBand, Myrinet and Quadrics. Our performance evaluation consists of two major parts.  ...  For our 8-node cluster, InfiniBand can offer significant performance improvements for a number of applications compared with Myrinet and Quadrics when using the PCI-X bus.  ...  Thanks to Fabrizio Petrini from LANL and David Addison from Quadrics for having many discussions related to Quadrics.  ...
doi:10.1145/1048935.1050208 dblp:conf/sc/LiuCWJKYBWP03 fatcat:jvcinutju5drdlt2b7bwshbyna

Poster reception---Optimized collectives for PGAS languages with one-sided communication

Dan Bonachea, Paul Hargrove, Rajesh Nishtala, Michael Welcome, Katherine Yelick
2006 Proceedings of the 2006 ACM/IEEE conference on Supercomputing - SC '06  
[Poster figure residue: NAS FT performance comparison of MPI Send/Recv, GASNet put+ack, and raw Portals on Myrinet/Pentium4 (64 nodes) and InfiniBand/Opteron (256 nodes); 1 kByte and 16-byte all-to-all performance of GASNet vs. MPI across tree geometries, including Quadrics (Elan3)/Alpha (64 nodes); times in microseconds.]
doi:10.1145/1188455.1188604 dblp:conf/sc/BonacheaHNWY06 fatcat:s7pvxh7sgve5rmis3dg6353r2u

10-Gigabit iWARP Ethernet: Comparative Performance Analysis with InfiniBand and Myrinet-10G

Mohammad J. Rashti, Ahmad Afsahi
2007 2007 IEEE International Parallel and Distributed Processing Symposium  
At the MPI level, iWARP performs better than InfiniBand in queue usage and buffer re-use.  ...  In this paper we assess the potential of such an interconnect for high-performance computing by comparing its performance with two leading cluster interconnects, InfiniBand and Myrinet-10G.  ...  We intend to extend our study to include uDAPL, sockets, and applications. We would also like to enhance the NetEffect MPI implementation.  ... 
doi:10.1109/ipdps.2007.370480 dblp:conf/ipps/RashtiA07 fatcat:fznac5yc3bchvn5on7tppbzofa

The impact of MPI queue usage on message latency

K.D. Underwood, R. Brightwell
2004 International Conference on Parallel Processing, 2004. ICPP 2004.  
These benchmarks are used to evaluate modern high-performance networks, including Quadrics, InfiniBand, and Myrinet.  ...  For example, traditional MPI latency benchmarks time a ping-pong communication with one send and one receive on each of two nodes.  ...  Specifically, InfiniBand and Myrinet perform as much as an order of magnitude better than Quadrics when MPI queues are lengthy.  ... 
doi:10.1109/icpp.2004.1327915 dblp:conf/icpp/UnderwoodB04 fatcat:bolzea6b65gn3acuthtxyopl3e
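
The snippet above refers to the standard ping-pong pattern that most latency benchmarks use. As a point of reference, here is a minimal sketch of that pattern in C with MPI (illustrative only, not the paper's benchmark code):

```c
/* Minimal MPI ping-pong latency sketch (illustrative, not the paper's
 * code): rank 0 sends, rank 1 echoes, and the round-trip time is halved
 * to estimate one-way latency. Real benchmarks add warm-up iterations. */
#include <mpi.h>
#include <stdio.h>

#define ITERS    1000
#define MSG_SIZE 8          /* bytes; small messages expose latency */

int main(int argc, char **argv)
{
    int rank;
    char buf[MSG_SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("one-way latency: %.2f us\n",
               (t1 - t0) * 1e6 / (2.0 * ITERS));
    MPI_Finalize();
    return 0;
}
```

The paper's point is what this sketch leaves out: it times the exchange with empty MPI queues, whereas the authors lengthen the posted-receive queue before timing, which is where the order-of-magnitude gap noted above appears.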

Benefits of high speed interconnects to cluster file systems: a case study with Lustre

Weikuan Yu, R. Noronha, Shuang Liang, D.K. Panda
2006 Proceedings 20th IEEE International Parallel & Distributed Processing Symposium  
The performance of Lustre over Quadrics is comparable to that of Lustre over InfiniBand with the platforms we have.  ...  In this paper, we perform an evaluation of a popular cluster file system, Lustre, over two of the leading high speed cluster interconnects: InfiniBand and Quadrics.  ...  It is to be noted that there is also a prototype implementation of Lustre over Myrinet/GM [20].  ...
doi:10.1109/ipdps.2006.1639564 dblp:conf/ipps/YuNLP06 fatcat:cttwtymv7fgwfm66gmok4m2or4

High performance support of parallel virtual file system (PVFS2) over Quadrics

Weikuan Yu, Shuang Liang, Dhabaleswar K. Panda
2005 Proceedings of the 19th annual international conference on Supercomputing - ICS '05  
We design and implement a Quadrics-capable version of a parallel file system (PVFS2).  ...  With four IO server nodes, our implementation improves PVFS2 aggregated read bandwidth by up to 140% compared to PVFS2 over TCP on top of the Quadrics IP implementation.  ...  Furthermore, we would also like to thank Drs. Daniel Kidger and David Addison from Quadrics, Inc. for their valuable technical support.  ...
doi:10.1145/1088149.1088192 dblp:conf/ics/YuLP05 fatcat:mpv3r6hmjzd4ben2zioklh2zs4

Microbenchmark performance comparison of high-speed cluster interconnects

Jiuxing Liu, B. Chandrasekaran, Weikuan Yu, Jiesheng Wu, D. Buntinas, S. Kini, D.K. Panda, P. Wyckoff
2004 IEEE Micro  
Acknowledgments This research is supported in part by Sandia National Laboratory contract number 30505; Department of Energy grant number DE-FC02-01ER25506; and National Science Foundation grants EIA-9986052, CCR-0204429, and CCR-0311542.  ...  Therefore, the application-level performance reflects not only the capability of the network interconnects, but also the quality of the MPI implementations and the design choices of the MPI implementers.  ...
doi:10.1109/mm.2004.1268994 fatcat:2tmrx5boybgtxkvbhlmvweey3y
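
Alongside latency, microbenchmark suites of this kind typically measure unidirectional bandwidth by keeping a window of non-blocking transfers in flight. A sketch of that pattern, with a hypothetical window size and message size (illustrative only, not the authors' code):

```c
/* Sketch of a unidirectional bandwidth microbenchmark (illustrative):
 * rank 0 keeps WINDOW sends in flight, rank 1 pre-posts the matching
 * receives; bandwidth is total bytes moved over elapsed time. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WINDOW   64                 /* hypothetical window size */
#define MSG_SIZE (256 * 1024)       /* large messages expose bandwidth */

int main(int argc, char **argv)
{
    int rank;
    char *buf = malloc((size_t)WINDOW * MSG_SIZE);
    MPI_Request reqs[WINDOW];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < WINDOW; i++)
        reqs[i] = MPI_REQUEST_NULL;     /* ranks other than 0/1 stay idle */

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < WINDOW; i++) {
        char *chunk = buf + (size_t)i * MSG_SIZE;   /* disjoint buffers */
        if (rank == 0)
            MPI_Isend(chunk, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &reqs[i]);
        else if (rank == 1)
            MPI_Irecv(chunk, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &reqs[i]);
    }
    MPI_Waitall(WINDOW, reqs, MPI_STATUSES_IGNORE);
    /* Zero-byte ack so rank 0 times full delivery, not local completion. */
    if (rank == 0)
        MPI_Recv(buf, 0, MPI_CHAR, 1, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    else if (rank == 1)
        MPI_Send(buf, 0, MPI_CHAR, 0, 1, MPI_COMM_WORLD);
    double t1 = MPI_Wtime();

    if (rank == 0)
        printf("bandwidth: %.1f MB/s\n",
               (double)WINDOW * MSG_SIZE / (t1 - t0) / 1e6);
    free(buf);
    MPI_Finalize();
    return 0;
}
```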

Open MPI: A High Performance, Flexible Implementation of MPI Point-to-Point Communications

Richard L. Graham, Brian W. Barrett, Galen M. Shipman, Timothy S. Woodall, George Bosilca
2007 Parallel Processing Letters  
This includes comparisons with other MPI implementations using the OpenIB, MX, and GM communications libraries.  ...  This paper describes the three point-to-point communications protocols currently supported in the Open MPI implementation, along with supporting performance data.  ...  Project support was provided through ASCI/PSE, the Los Alamos Computer Science Institute, and the Center for Information Technology Research (CITR) of the University of Tennessee.  ...
doi:10.1142/s0129626407002880 fatcat:veuwtdqeorhhbivxcbym2zjkva
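
The point-to-point protocols referenced here are, in most MPI engines, variations on eager delivery for short messages and a rendezvous handshake (often RDMA-based) for long ones. A conceptual sketch of the size-based selection, with a hypothetical threshold (this is not Open MPI's actual code):

```c
/* Conceptual sketch of eager-vs-rendezvous protocol selection as found
 * in many MPI engines (threshold and names are hypothetical, not Open
 * MPI's internals). */
#include <stddef.h>

#define EAGER_LIMIT (12 * 1024)     /* hypothetical crossover point */

typedef enum { PROTO_EAGER, PROTO_RENDEZVOUS } proto_t;

static proto_t choose_protocol(size_t msg_len)
{
    /* Eager: ship the payload with the match header immediately; costs
     * a copy into a pre-registered bounce buffer but avoids a round trip. */
    if (msg_len <= EAGER_LIMIT)
        return PROTO_EAGER;

    /* Rendezvous: send only a request-to-send; the bulk data moves
     * (often via RDMA) after the receiver matches and replies with the
     * address of the destination buffer. */
    return PROTO_RENDEZVOUS;
}
```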

An Initial Analysis of the Impact of Overlap and Independent Progress for MPI [chapter]

Ron Brightwell, Keith D. Underwood, Rolf Riesen
2004 Lecture Notes in Computer Science  
In this paper, we compare the performance of several application benchmarks using an MPI implementation that takes advantage of a programmable NIC to implement MPI semantics with an implementation that  ...  Two important features of an MPI implementation are independent progress and the ability to overlap computation with communication.  ...  For example, there have been several extensive comparisons of high-performance commodity interconnects, such as Myrinet [1], Quadrics [2], and InfiniBand [3].  ...
doi:10.1007/978-3-540-30218-6_51 fatcat:tpddlrvmffcyxdk3ntzinq3gia
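
Overlap and independent progress can be made concrete with the standard non-blocking pattern: post the communication, compute on data that does not depend on it, then complete it. A minimal sketch (illustrative; the buffer layout is hypothetical):

```c
/* Sketch of computation/communication overlap (illustrative). Whether
 * the transfer actually advances during the compute loop depends on
 * independent progress: a programmable NIC can move the data on its own,
 * whereas host-only implementations may progress it only inside MPI
 * calls such as MPI_Waitall. */
#include <mpi.h>

/* halo holds 2*n doubles: [0,n) receives from peer, [n,2n) is sent. */
void exchange_and_compute(double *halo, int n, int peer,
                          double *work, int m)
{
    MPI_Request reqs[2];

    MPI_Irecv(halo,     n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo + n, n, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    for (int i = 0; i < m; i++)          /* interior work, no halo needed */
        work[i] = 0.5 * work[i] + 1.0;

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);   /* complete before halo use */
}
```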

A feasibility analysis of power-awareness and energy minimization in modern interconnects for high-performance computing

Reza Zamani, Ahmad Afsahi, Ying Qian, Carl Hamacher
2007 2007 IEEE International Conference on Cluster Computing  
The MPI implementation is the Quadrics MPI, version MPI.1.24-49.intel81.  ...  They do not represent high-performance clusters with their server-class nodes and modern interconnects such as Myrinet [29], Quadrics [4], and InfiniBand [19].  ...
doi:10.1109/clustr.2007.4629224 dblp:conf/cluster/ZamaniAQH07 fatcat:bqfxcvt3tjdxvkx3w34c6eenum

Optimizing bandwidth limited problems using one-sided communication and overlap

C. Bell, D. Bonachea, R. Nishtala, K. Yelick
2006 Proceedings 20th IEEE International Parallel & Distributed Processing Symposium  
Our best one-sided implementations show an average improvement of 15% over our best two-sided implementations.  ...  We demonstrate this benefit through communication microbenchmarks and a case study that compares UPC and MPI implementations of the NAS Fourier Transform (FT) benchmark.  ...  (e.g., Myrinet and InfiniBand).  ...
doi:10.1109/ipdps.2006.1639320 dblp:conf/ipps/BellBNY06 fatcat:33xs5aiegvgezkxiw77wn2ob6u
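
The paper's one-sided results come from UPC over GASNet; the same model can be illustrated with MPI's RMA interface, where the target exposes memory once and the origin writes with no matching receive. A minimal sketch (illustrative, not the authors' code):

```c
/* One-sided transfer sketch using MPI RMA (illustrative; the paper uses
 * UPC/GASNet, but the model is the same): the target exposes a window,
 * and the origin writes into it with MPI_Put, with no receive posted. */
#include <mpi.h>

void one_sided_put(double *local, double *exposed, int n, int peer)
{
    MPI_Win win;

    /* All ranks collectively expose their buffers as RMA windows. */
    MPI_Win_create(exposed, n * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);               /* open the access epoch */
    MPI_Put(local, n, MPI_DOUBLE,        /* origin buffer */
            peer, 0, n, MPI_DOUBLE,      /* target rank and displacement */
            win);
    MPI_Win_fence(0, win);               /* all transfers complete here */

    MPI_Win_free(&win);
}
```

Decoupling the data transfer from receiver-side matching this way is what creates the room for overlap that the abstract credits for the 15% improvement.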

NewMadeleine: An efficient support for high-performance networks in MPICH2

Guillaume Mercier, Francois Trahay, Elisabeth Brunet, Darius Buntinas
2009 2009 IEEE International Symposium on Parallel & Distributed Processing  
This paper describes how the NewMadeleine communication library has been integrated within the MPICH2 MPI implementation and the benefits it brings.  ...  By doing so, we allow NewMadeleine to fully deliver its performance to an MPI application.  ...  Dept. of Energy, under Contract DE-AC02-06CH11357 and by the ACI GRID.  ...
doi:10.1109/ipdps.2009.5161003 dblp:conf/ipps/MercierTBB09 fatcat:3rwmiw6lhrazvlqdbpnvinhs7y

MPI over uDAPL: Can High Performance and Portability Exist Across Architectures?

Lei Chai, R. Noronha, D.K. Panda
2006 Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID'06)  
It also delivers the same good performance as MPI implemented over the native APIs of the underlying interconnect.  ...  Experimental results on Solaris show that the multi-stream design can improve bandwidth over InfiniBand by 30%, and improve application performance by up to 11%.  ...  MVAPICH [4] is a high-performance MPI-1 implementation over InfiniBand. It is an implementation of the MPICH [17] ADI2 layer.  ...
doi:10.1109/ccgrid.2006.70 dblp:conf/ccgrid/ChaiNP06 fatcat:kkoovr2mzzgnlcmingcsycluhi
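
The multi-stream design evidently stripes traffic across several uDAPL connections so that their flow control runs in parallel. A conceptual sketch of the striping idea, using tagged MPI non-blocking sends merely as stand-ins for separate uDAPL endpoints (stream count and helper are hypothetical):

```c
/* Conceptual multi-stream striping sketch (illustrative only; the paper
 * stripes over uDAPL connections, for which tagged MPI sends are only a
 * stand-in): a large buffer is cut into NSTREAMS chunks pushed
 * concurrently, one per stream. */
#include <mpi.h>
#include <stddef.h>

#define NSTREAMS 4                  /* hypothetical stream count */

void striped_send(char *buf, size_t len, int peer)
{
    MPI_Request reqs[NSTREAMS];
    size_t chunk = (len + NSTREAMS - 1) / NSTREAMS;

    for (int s = 0; s < NSTREAMS; s++) {
        size_t off = (size_t)s * chunk;
        size_t n   = off >= len ? 0 : (len - off < chunk ? len - off : chunk);
        /* Tag s plays the role of the per-stream connection. */
        MPI_Isend(buf + off, (int)n, MPI_CHAR, peer, s,
                  MPI_COMM_WORLD, &reqs[s]);
    }
    MPI_Waitall(NSTREAMS, reqs, MPI_STATUSES_IGNORE);
}
```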

Network Fault Tolerance in LA-MPI [chapter]

Rob T. Aulwes, David J. Daniel, Nehal N. Desai, Richard L. Graham, L. Dean Risinger, Mitchel W. Sukalski, Mark A. Taylor
2003 Lecture Notes in Computer Science  
Finally, we include some performance numbers for Quadrics Elan, Myrinet GM and UDP network data paths.  ...  LA-MPI is a high-performance, network-fault-tolerant implementation of MPI designed for terascale clusters that are inherently unreliable due to their very large number of system components and to trade-offs  ...  LA-MPI supports job spawning and control with Platform LSF, Quadrics RMS (Tru64 only), Bproc [14], and standard BSD rsh.  ...
doi:10.1007/978-3-540-39924-7_48 fatcat:4genxqxzdbdwtis6ixxllty3wu

A Cost-Effective, High Bandwidth Server I/O network Architecture for Cluster Systems

Hsing-bung Chen, Gary Grider, Parks Fields
2007 2007 IEEE International Parallel and Distributed Processing Symposium  
Concurrent MPI-I/O performance testing results and a deployment cost comparison demonstrate that the PaScal server I/O network architecture can outperform the FESIO network architecture in many categories: cost-effectiveness, scalability, manageability, and ease of building large-scale I/O networks.  ...  ACKNOWLEDGEMENTS We are thankful to co-workers from LANL's HPC-3, CTN-5, and HPC-5 groups for their support on the design and implementation of the PaScal Server I/O Infrastructure, Benjamin McCelland (LANL's  ...
doi:10.1109/ipdps.2007.370221 dblp:conf/ipps/ChenGF07 fatcat:4sktmbchvjacbl4iznptvrvpxe
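
The concurrent MPI-I/O tests mentioned above boil down to many ranks hitting one shared file at once. A minimal sketch of that access pattern using MPI-IO (illustrative; the path and block size are hypothetical):

```c
/* Sketch of a concurrent MPI-I/O access pattern (illustrative): each
 * rank collectively writes its own disjoint 1 MiB block of one shared
 * file, which is the kind of load such server I/O tests generate. */
#include <mpi.h>

#define BLOCK (1 << 20)             /* 1 MiB per rank, hypothetical */

void write_shared_file(const char *path, char *block)
{
    int rank;
    MPI_File fh;

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_File_open(MPI_COMM_WORLD, path,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write at a rank-specific offset: the file system sees
     * every client writing the same file concurrently. */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * BLOCK, block, BLOCK,
                          MPI_CHAR, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
}
```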
Showing results 1 — 15 out of 164