9,459 Hits in 5.3 sec

Extending the MPI-2 Generalized Request Interface [chapter]

Robert Latham, William Gropp, Robert Ross, Rajeev Thakur
Lecture Notes in Computer Science  
The MPI-2 standard added a new feature to MPI called generalized requests.  ...  Generalized requests allow users to add new nonblocking operations to MPI while still using many pieces of MPI infrastructure, such as request objects and the progress notification routines (MPI_Test, MPI ...  We can accommodate this requirement by extending the existing generalized request functions.  ... 
doi:10.1007/978-3-540-75416-9_33 fatcat:px5tlozkgfa5zbwrsumafzsnry
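The generalized-request mechanism this chapter extends can be illustrated with a small sketch. The class below is a hypothetical, MPI-free Python analogue (the real interface is MPI_Grequest_start / MPI_Grequest_complete in C): the user's own progress mechanism performs the work and signals completion, while Test-style polling and the user-supplied query/free callbacks mirror what MPI invokes on the request.

```python
class GeneralizedRequest:
    """MPI-free sketch of an MPI-2 generalized request: a user-defined
    nonblocking operation that plugs into a Test/Wait-style interface."""

    def __init__(self, query_fn, free_fn, extra_state):
        self.query_fn = query_fn      # fills in a status when queried
        self.free_fn = free_fn        # releases user resources on free
        self.extra_state = extra_state
        self.done = False

    def complete(self):
        # Analogue of MPI_Grequest_complete: the user's worker marks the
        # operation finished; the library only learns of it via this call.
        self.done = True

    def test(self):
        # Analogue of MPI_Test: poll for completion, return (flag, status).
        if not self.done:
            return False, None
        return True, self.query_fn(self.extra_state)

    def free(self):
        # Analogue of request free: run the user's cleanup callback.
        self.free_fn(self.extra_state)


def run_async(request, work_fn):
    # Stand-in for the user's progress mechanism (thread, event loop, ...):
    # do the work, then complete the request so test() can observe it.
    work_fn(request.extra_state)
    request.complete()
```

A caller would poll `req.test()` exactly as it polls MPI_Test on a built-in request; the chapter's point is that the work itself, and its progress, live entirely in user code.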

MPI-2: Extending the message-passing interface [chapter]

Al Geist, William Gropp, Steve Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, William Saphir, Tony Skjellum, Marc Snir
1996 Lecture Notes in Computer Science  
Other topics being discussed in MPI-2 include extending MPI-1's collective operations to intercommunicators and nonblocking operations (Section 5), bindings for C++ and Fortran 90 (Section 6), and interface  ...  General information on MPI is available at [1]. For the purposes of this paper, it will be useful to refer to the result of the initial MPI standardization effort as "MPI-1."  ...  Finally, the external interface definition in MPI-2 allows a generalization of the MPI-1 caching mechanism to allow caching on additional handles.  ... 
doi:10.1007/3-540-61626-8_16 fatcat:442nupqejrc7nlh2skslpw2si4

Spark-MPI: Approaching the Fifth Paradigm of Cognitive Applications [article]

Nikolay Malitsky, Ralph Castain, Matt Cowan
2018 arXiv   pre-print
The paper addresses the existing impedance mismatch between data-intensive and compute-intensive ecosystems by presenting the Spark-MPI approach based on the MPI Exascale Process Management Interface ...  The success of data-intensive projects subsequently triggered the next generation of machine learning approaches.  ...  Fig. 2 shows a general overview of the Spark-MPI integrated environment.  ... 
arXiv:1806.01110v1 fatcat:x6mmqowmkje7heitxvhaf5fsui

Callback-based completion notification using MPI Continuations

Joseph Schuchart, Philipp Samfass, Christoph Niethammer, José Gracia, George Bosilca
2021 Parallel Computing  
In this paper, we present an extension to the previously described interface that allows for finer control of the behavior of the MPI Continuations interface.  ...  We show that the interface, implemented inside Open MPI, enables low-latency, high-throughput completion notifications that outperform solutions implemented in the application space.  ...  In the current paper, we update and extend the description of this interface (Sections 2 and 3).  ... 
doi:10.1016/j.parco.2021.102793 fatcat:nn4rbtlw3fa2dglctm6loio6ie
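The completion model this paper proposes differs from Test/Wait polling: a callback is attached to a request and fires when the operation completes. The sketch below is a hypothetical, MPI-free Python analogue of that idea (the actual MPI Continuations interface is a C-level Open MPI extension), including the detail that a continuation attached to an already-completed request fires immediately.

```python
class Request:
    """Sketch of a request supporting continuation-style completion
    notification instead of (or in addition to) polling."""

    def __init__(self):
        self.done = False
        self._continuations = []

    def attach(self, callback):
        # Register a completion callback; if the request has already
        # completed, invoke it right away rather than losing it.
        if self.done:
            callback(self)
        else:
            self._continuations.append(callback)

    def complete(self):
        # Completion notification: mark done, then fire every attached
        # continuation exactly once.
        self.done = True
        for cb in self._continuations:
            cb(self)
        self._continuations.clear()
```

The attraction measured in the paper is that the runtime delivers the notification, so the application never burns cycles polling in its own progress loop.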

Implementation and Evaluation of MPI Nonblocking Collective I/O

Sangmin Seo, Robert Latham, Junchao Zhang, Pavan Balaji
2015 15th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing  
We then utilize a state machine and the extended generalized request interface to maintain the progress of nonblocking collective I/O operations.  ...  We present here initial work on the implementation of MPI nonblocking collective I/O operations in the MPICH MPI library.  ...  We gratefully acknowledge the computing resources provided on Blues, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.  ... 
doi:10.1109/ccgrid.2015.81 dblp:conf/ccgrid/SeoLZB15 fatcat:fkzlstp4xvbjxdp2srk3oiqg5m

Implementation and Usage of the PERUSE-Interface in Open MPI [chapter]

Rainer Keller, George Bosilca, Graham Fagg, Michael Resch, Jack J. Dongarra
2006 Lecture Notes in Computer Science  
We introduce the general design criteria of the interface implementation and analyze the overhead generated by this functionality.  ...  This paper describes the implementation and usage of, and experience with, the MPI performance-revealing extension interface (Peruse) in the Open MPI implementation.  ...  In the future, the authors would like to extend the Open MPI Peruse system with additional events yet to be defined in the current Peruse specification, e.g., collective routines and/or one-sided operations  ... 
doi:10.1007/11846802_48 fatcat:4qhbw5e5bzdb5ie36tvqi7d56u

A high-level C++ approach to manage local errors, asynchrony and faults in an MPI application [article]

Christian Engwer, Mirco Altenbernd, Nils-Arne Dreier, Dominik Göddeke
2018 arXiv   pre-print
In addition we present a dedicated implementation, which integrates seamlessly with MPI-ULFM, the most prominent proposal for extending MPI towards fault tolerance.  ...  In this paper we present an approach that adds extended exception propagation support to C++ MPI programs.  ...  Switching to such an MPI deployment furthermore extends the types of faults which can be handled.  ... 
arXiv:1804.04481v2 fatcat:zyxw73ummja2hg3igdlvelz6je

KNEM: A generic and scalable kernel-assisted intra-node MPI communication framework

Brice Goglin, Stéphanie Moreaud
2013 Journal of Parallel and Distributed Computing  
This paper presents the KNEM module for the Linux kernel that provides MPI implementations with a flexible and scalable interface for performing kernel-assisted single-copy data transfers between local  ...  MPI operations inside shared-memory computing nodes.  ...  of the extended interface; and Damien Guinier and Sylvain Jeaugey from Bull for helping with understanding the performance behavior of KNEM-enabled MPI layers.  ... 
doi:10.1016/j.jpdc.2012.09.016 fatcat:tju6ioabenbmxefi3atx5cin74

PNMPI tools

Martin Schulz, Bronis R. de Supinski
2007 Proceedings of the 2007 ACM/IEEE conference on Supercomputing - SC '07  
PNMPI extends the PMPI profiling interface to support multiple concurrent PMPI-based tools by enabling users to assemble tool stacks.  ...  Further, we extend PNMPI to platforms without dynamic linking, such as BlueGene/L, and we introduce an extended performance model along with experimental data from microbenchmarks to show that the performance  ...  We load the request module last to make the extended request objects available in the entire tool stack.  ... 
doi:10.1145/1362622.1362663 dblp:conf/sc/SchulzS07 fatcat:nmjb3rvy6rf4hbvwnzqd7i6uje
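The tool-stacking idea behind PNMPI can be sketched compactly. The helpers below are a hypothetical Python analogue (real PNMPI interposes compiled C tools between an application's MPI_* calls and the PMPI_* entry points): each tool wraps the next layer, and one intercepted call flows through every tool in stack order before reaching the base implementation.

```python
def make_stack(base_fn, *tools):
    """Compose profiling tools around a base function, outermost first,
    the way PNMPI routes one PMPI interception through a tool stack."""
    fn = base_fn
    for tool in reversed(tools):
        fn = tool(fn)          # each tool wraps the layer beneath it
    return fn


def counting_tool(counter):
    """A trivial tool layer: count calls, then fall through."""
    def wrap(next_fn):
        def send(buf):
            counter["calls"] += 1   # tool-level bookkeeping
            return next_fn(buf)     # hand off to the next tool / base
        return send
    return wrap
```

With two counting tools stacked over a base send, a single call increments both counters and still returns the base result, which is the property that lets independent PMPI tools coexist.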

Extending the Message Passing Interface (MPI) with User-Level Schedules [article]

Derek Schafer, Sheikh Ghafoor, Daniel Holmes, Martin Ruefenacht, Anthony Skjellum
2019 arXiv   pre-print
However, the existing extensibility mechanism in MPI (generalized requests) is not widely utilized and has significant drawbacks.  ...  Extending MPI by composing its operations with user-level operations provides useful integration with the progress engine and completion notification methods of MPI.  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.  ... 
arXiv:1909.11762v1 fatcat:nlbylcwhjrhzncr3olkiwc7pdy

Enabling callback-driven runtime introspection via MPI_T

Marc-André Hermanns, Nathan T. Hjelm, Michael Knobloch, Kathryn Mohror, Martin Schulz
2018 Proceedings of the 25th European MPI Users' Group Meeting on - EuroMPI'18  
Performance tools for MPI currently rely on the PMPI Profiling Interface or the MPI Tools Information Interface, MPI_T, for portably collecting information for performance measurement and analysis.  ...  Understanding the behavior of parallel applications that use the Message Passing Interface (MPI) is critical for optimizing communication performance.  ...  ACKNOWLEDGMENT We thank our colleagues at the MPI Forum and specifically the MPI Forum Tools Working Group for their valuable feedback during the discussion of this interface.  ... 
doi:10.1145/3236367.3236370 dblp:conf/pvm/HermannsHKM018 fatcat:pglhdjte5relthtdde4ecwalqi

Notified Access: Extending Remote Memory Access Programming Models for Producer-Consumer Synchronization

Roberto Belli, Torsten Hoefler
2015 2015 IEEE International Parallel and Distributed Processing Symposium  
Furthermore, we provide guidance for the design of low-level network interfaces to support Notified Access efficiently.  ...  We implement our scheme in an open source MPI-3 RMA library and demonstrate lower overheads (two cache misses) than other point-to-point synchronization mechanisms.  ...  We thank the GASPI team for inspiring discussions about RMA interfaces and Christian Simmendinger for numerous clarifications about the GASPI specification.  ... 
doi:10.1109/ipdps.2015.30 dblp:conf/ipps/BelliH15 fatcat:vihdr3456zd5popdbgdd5h7hdi

MPI's Language Bindings are Holding MPI Back [article]

Martin Ruefenacht, Derek Schafer, Anthony Skjellum, Purushotham V. Bangalore
2021 arXiv   pre-print
Demand is demonstrably strong for this second attempt at language support for C++ in MPI after the original interface, which was added in MPI-2, was later found to lack specific benefits over the C binding  ...  But MPI's syntax and semantics were defined and extended with C and Fortran interfaces that align with the capabilities and limitations of C89 and Fortran-77. Unfortunately, the language-independent specification  ...  Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.  ... 
arXiv:2107.10566v1 fatcat:oei2zfxmw5dmndllrwjggjzyeq

Enabling MPI interoperability through flexible communication endpoints

James Dinan, Pavan Balaji, David Goodell, Douglas Miller, Marc Snir, Rajeev Thakur
2013 Proceedings of the 20th European MPI Users' Group Meeting on - EuroMPI '13  
The current MPI model defines a one-to-one relationship between MPI processes and MPI ranks.  ...  In this paper, we describe an extension to MPI that introduces communication endpoints as a means to relax the one-to-one relationship between processes and threads.  ...  Acknowledgments We thank the members of the MPI Forum, the MPI Forum hybrid working group, and the MPI community for discussions related to this work. This work was supported by the U.S.  ... 
doi:10.1145/2488551.2488553 dblp:conf/pvm/DinanBGMST13 fatcat:uvj244olencmraiy53lw3atyoa

Portals 3.0: protocol building blocks for low overhead communication

R. Brightwell, R. Riesen, B. Lawry, A.B. Maccabe
2002 Proceedings 16th International Parallel and Distributed Processing Symposium  
This paper describes the evolution of the Portals message passing architecture and programming interface from its initial development on tightly-coupled massively parallel platforms to the current implementation  ...  Portals provides the basic building blocks needed for higher-level protocols to implement scalable, low-overhead communication.  ...  Acknowledgments The authors would like to acknowledge Tramm Hudson, Michael Levenhagen, Dena Vigil, and Riley Wilson for their contributions to this research.  ... 
doi:10.1109/ipdps.2002.1016564 dblp:conf/ipps/BrightwellLMR02 fatcat:r3wtnnrj4rc6to5d55dgdkr5xy
Showing results 1 — 15 out of 9,459 results