Parallel programming with message passing and directives

S.W. Bova, C.P. Breshears, H. Gabb, B. Kuhn, B. Magro, R. Eigenmann, G. Gaertner, S. Salvini, H. Scott
2001, Computing in Science & Engineering (Print)
The authors discuss methods for expressing and tuning the performance of parallel programs using two programming models in the same program: distributed and shared memory. Such methods are important for anyone who uses large parallel machines, as well as for those who study combinations of the two programming models.

Parallel application developers today face the problem of how to integrate the dominant parallel processing models into one source code. Most high-performance systems use the Distributed Memory Parallel (DMP) and Shared Memory Parallel (SMP; also known as Symmetric MultiProcessor) models, and many applications can benefit from support for multiple parallelism modes. Here we show how to integrate both modes into high-performance parallel applications. Such applications have three primary goals:

• high speedup, scalable performance, and efficient system use;
• similar behavior on a wide range of platforms and easy portability between platforms; and
• low development time and uncomplicated maintenance.

Most programmers use the dominant parallel programming languages for DMP and SMP: the Message Passing Interface [1] (MPI; www.mpi-forum.org) and OpenMP [2,3] (www.openmp.org), respectively. Some of the applications we study here use PVM instead of MPI (see Table 1). This article illustrates good parallel software engineering techniques for managing the complexity of using both DMP and SMP parallelism; a minimal hybrid sketch appears after the Applications overview below.

Applications

The applications listed in Table 1 solve problems in hydrology, computational chemistry, general science, seismic processing, aeronautics, and computational physics. Emphasizing both I/O and computation, they apply several numerical methods, including finite-element analysis, wave-equation integration, linear algebra subroutines, fast Fourier transforms (FFTs), filters, and a variety of partial differential equations (PDEs) and ordinary differential equations (ODEs).
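To make the combination of the two models concrete, here is a minimal hybrid sketch of our own; it does not appear in the article. MPI distributes a global iteration range across processes (the DMP level), while an OpenMP directive shares each process's subrange among threads (the SMP level). The numerical task, a midpoint-rule estimate of pi, and all variable names are illustrative assumptions; any MPI-2 library plus an OpenMP-capable compiler should build it, for example with mpicc -fopenmp.

/* Hybrid MPI+OpenMP sketch (illustrative; not from the article). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nprocs;

    /* Request funneled threading: only the master thread calls MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* DMP level: split the global iteration range across MPI ranks. */
    const long N = 100000000;                 /* illustrative problem size */
    long chunk = N / nprocs;
    long lo = rank * chunk;
    long hi = (rank == nprocs - 1) ? N : lo + chunk;

    /* SMP level: OpenMP threads share the work within this rank's range. */
    double local = 0.0;
#pragma omp parallel for reduction(+:local)
    for (long i = lo; i < hi; i++) {
        double x = (i + 0.5) / N;
        local += 4.0 / (1.0 + x * x);         /* midpoint rule for pi */
    }

    /* DMP level again: combine per-process partial sums by message passing. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("pi ~= %.12f\n", global / N);

    MPI_Finalize();
    return 0;
}

Requesting MPI_THREAD_FUNNELED declares that only the master thread makes MPI calls, a common and portable choice for hybrid codes because it does not require a fully thread-safe MPI implementation.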
doi:10.1109/5992.947105