OMPI

Hirotaka Ogawa, Satoshi Matsuoka
Proceedings of the 1996 ACM/IEEE Conference on Supercomputing (CDROM), Supercomputing '96
MPI is gaining acceptance as a standard for message-passing in high-performance computing, due to its powerful and flexible support of various communication styles. However, the complexity of its API imposes significant software overhead, and as a result the applicability of MPI has been restricted to rather regular, coarse-grained computations. Our OMPI (Optimizing MPI) system removes much of this excess overhead by employing partial evaluation techniques, which exploit static information about MPI calls. Because partial evaluation alone is insufficient, we also utilize template functions for further optimization. To validate the effectiveness of our OMPI system, we performed baseline as well as more extensive benchmarks on a set of application cores with different communication characteristics, on the 64-node Fujitsu AP1000 MPP. Benchmarks show that OMPI improves execution efficiency by as much as a factor of two for communication-intensive application cores, with minimal code increase. It also performs significantly better than a previous dynamic optimization technique.
doi:10.1145/369028.369106
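
The abstract's central idea is that many MPI call sites fix their count, datatype, destination, and communicator at compile time, so a partial evaluator can replace the fully general library path with a specialized one. The sketch below is an assumed, minimal illustration of that idea in plain C against the standard MPI API; it is not code from OMPI, and the helper names send_rows and send_rows_spec are hypothetical.

/* Illustrative sketch only: contrasts a fully dynamic MPI_Send call site
 * with the kind of constant-folded form a partial evaluator could emit
 * when everything but the buffer address is statically known. */
#include <mpi.h>

/* Generic form: every argument is dynamic, so the MPI library must
 * dispatch on the datatype and validate parameters on each invocation. */
static void send_rows(const double *buf, int count, MPI_Datatype type,
                      int dest, int tag, MPI_Comm comm)
{
    MPI_Send(buf, count, type, dest, tag, comm);
}

/* Specialized form for a call site where count, datatype, destination,
 * tag, and communicator are compile-time constants; nothing is left for
 * the library to interpret at run time except the buffer address. */
static void send_rows_spec(const double *buf)
{
    MPI_Send(buf, 128, MPI_DOUBLE, /*dest=*/1, /*tag=*/0, MPI_COMM_WORLD);
}

int main(int argc, char **argv)
{
    double buf[128] = {0};
    int rank;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Both calls deliver the same 128 doubles to rank 1. */
        send_rows(buf, 128, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);  /* generic     */
        send_rows_spec(buf);                                    /* specialized */
    } else if (rank == 1) {
        MPI_Recv(buf, 128, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
        MPI_Recv(buf, 128, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, &status);
    }

    MPI_Finalize();
    return 0;
}

Compiled with an MPI C compiler and run on two ranks, the two send paths are functionally identical; the contrast is only meant to show what static information at a call site makes available to specialization, which is the kind of opportunity the paper's partial-evaluation approach targets.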