747 Hits in 4.1 sec

Splitting TCP for MPI Applications Executed on Grids

Olivier Glück, Jean-Christophe Mignot
2011 2011 IEEE Ninth International Symposium on Parallel and Distributed Processing with Applications  
Then, we propose MPI5000, a transparent applicative layer between MPI and TCP, using proxies to improve the execution of MPI applications on grids.  ...  In this paper, we first study the interaction between MPI applications and TCP on grids.  ...  Acknowledgment Experiments presented in this paper were carried out using the Grid'5000 experimental testbed, being developed under the INRIA ALADDIN development action with support from CNRS, RENATER  ... 
doi:10.1109/ispa.2011.11 dblp:conf/ispa/GluckM11 fatcat:2wlia4j6drb3vg6yhochatkxxe
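The entry above describes MPI5000 as a transparent layer that routes MPI traffic through proxies on the path between sites. As a rough illustration of the proxy idea only, and not the MPI5000 code, the sketch below relays TCP bytes between one local client and a remote endpoint; the LISTEN_PORT, REMOTE_ADDR and REMOTE_PORT constants are made up for the example.

```c
/* Generic single-connection TCP relay, sketching the "proxy" idea only;
 * this is not the MPI5000 implementation. It listens on LISTEN_PORT,
 * accepts one client, connects to a remote endpoint, and forwards bytes
 * in both directions until either side closes. Error handling is omitted. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define LISTEN_PORT 9000            /* hypothetical local port           */
#define REMOTE_ADDR "192.0.2.10"    /* hypothetical remote proxy address */
#define REMOTE_PORT 9000

static int forward(int from, int to)
{
    char buf[65536];
    ssize_t n = read(from, buf, sizeof buf);
    if (n <= 0) return -1;                        /* EOF or error */
    return write(to, buf, (size_t)n) == n ? 0 : -1;
}

int main(void)
{
    /* accept one local connection */
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in local = { .sin_family = AF_INET,
                                 .sin_port = htons(LISTEN_PORT),
                                 .sin_addr.s_addr = htonl(INADDR_ANY) };
    bind(lsock, (struct sockaddr *)&local, sizeof local);
    listen(lsock, 1);
    int client = accept(lsock, NULL, NULL);

    /* connect to the remote side (e.g. the peer proxy) */
    int remote = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port = htons(REMOTE_PORT) };
    inet_pton(AF_INET, REMOTE_ADDR, &peer.sin_addr);
    connect(remote, (struct sockaddr *)&peer, sizeof peer);

    /* relay bytes in both directions until one side closes */
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(client, &fds);
        FD_SET(remote, &fds);
        int maxfd = client > remote ? client : remote;
        if (select(maxfd + 1, &fds, NULL, NULL, NULL) < 0) break;
        if (FD_ISSET(client, &fds) && forward(client, remote) < 0) break;
        if (FD_ISSET(remote, &fds) && forward(remote, client) < 0) break;
    }
    close(client); close(remote); close(lsock);
    return 0;
}
```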

Private Virtual Cluster: Infrastructure and Protocol for Instant Grids [chapter]

Ala Rezmerita, Tangui Morlier, Vincent Neri, Franck Cappello
2006 Lecture Notes in Computer Science  
To demonstrate its properties, we have used PVC to connect a set of firewall-protected PCs and conducted experiments to evaluate the networking performance and the capability to execute unmodified MPI applications  ...  We propose a new approach called "Instant Grid" (IG), which combines various Grid, P2P and VPN approaches, allowing simple deployment of applications over different administration domains.  ...  The MPIPOV test measures the execution time for the computation of a graphical rendering application parallelized with MPI. MPIPOV uses a master-worker algorithm.  ... 
doi:10.1007/11823285_41 fatcat:5bpxh4mjmzfpzfxglfdvhdjmdi
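The MPIPOV benchmark mentioned above distributes rendering work with a master-worker algorithm. The sketch below is a generic MPI master-worker skeleton, not the MPIPOV source: rank 0 hands out integer task IDs, workers return one double per task, and a negative task ID signals termination; do_work() and NTASKS are placeholders introduced for the example.

```c
/* Generic MPI master-worker skeleton: rank 0 hands out work units,
 * the other ranks compute and return results. Illustrative only. */
#include <mpi.h>

#define NTASKS 100

static double do_work(int task) { return task * 2.0; }   /* placeholder */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                          /* master: hands out tasks */
        int next = 0, active = 0, stop = -1;
        double result;
        MPI_Status st;

        /* initial distribution: one task per worker while tasks remain */
        for (int w = 1; w < size && next < NTASKS; w++) {
            MPI_Send(&next, 1, MPI_INT, w, 0, MPI_COMM_WORLD);
            next++; active++;
        }
        /* workers that never got a task are told to stop right away */
        for (int w = NTASKS + 1; w < size; w++)
            MPI_Send(&stop, 1, MPI_INT, w, 0, MPI_COMM_WORLD);

        /* collect results, refilling workers until tasks run out */
        while (active > 0) {
            MPI_Recv(&result, 1, MPI_DOUBLE, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, &st);
            active--;
            if (next < NTASKS) {
                MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
                next++; active++;
            } else {
                MPI_Send(&stop, 1, MPI_INT, st.MPI_SOURCE, 0, MPI_COMM_WORLD);
            }
        }
    } else {                                  /* worker: compute until stop */
        int task;
        for (;;) {
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            if (task < 0) break;
            double r = do_work(task);
            MPI_Send(&r, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }
    MPI_Finalize();
    return 0;
}
```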

QCG-OMPI: MPI applications on grids

Emmanuel Agullo, Camille Coti, Thomas Herault, Julien Langou, Sylvain Peyronnet, Ala Rezmerita, Franck Cappello, Jack Dongarra
2011 Future generations computer systems  
A full job management and execution stack has been designed in order to support applications on grids. QosCosGrid uses QCG-OMPI as its MPI implementation.  ...  In this paper we present how QCG-OMPI can efficiently execute parallel applications on computational grids. We first present an MPI programming, communication and execution middleware called QCG-OMPI.  ...  Special thanks are due to George Bosilca for his explanations about the implementation of collective operations in Open MPI.  ... 
doi:10.1016/j.future.2010.11.015 fatcat:c7wg66kyqfc37nfi7ypyybwjie

Understanding the Behavior and Performance of Non-blocking Communications in MPI [chapter]

Taher Saif, Manish Parashar
2004 Lecture Notes in Computer Science  
The behavior and performance of MPI non-blocking message passing operations are sensitive to implementation specifics as they are heavily dependent on available system-level buffers.  ...  In this paper we investigate the behavior of non-blocking communication primitives provided by popular MPI implementations and propose strategies for these primitives that can reduce processor synchronization  ...  On the SP2 the evaluation run used a base grid size of 256*64*64 and executed 100 iterations.  ... 
doi:10.1007/978-3-540-27866-5_22 fatcat:htstpv22evamfdjcst5he3cyfe
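The snippet above concerns MPI non-blocking primitives whose actual progress depends on implementation-level buffering. The minimal example below, which assumes exactly two ranks and is not code from the paper, shows the standard MPI_Isend/MPI_Irecv/MPI_Waitall pattern whose overlap behavior such studies measure.

```c
/* Standard non-blocking exchange between two ranks: post the receive,
 * then the send, do useful work, and only then wait. Whether the transfer
 * actually progresses during the compute phase depends on the MPI
 * implementation and its internal buffering. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    double sendbuf[1024], recvbuf[1024];
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {                       /* sketch assumes exactly 2 ranks */
        if (rank == 0) fprintf(stderr, "run with 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    int peer = (rank == 0) ? 1 : 0;
    for (int i = 0; i < 1024; i++) sendbuf[i] = rank;

    MPI_Irecv(recvbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(sendbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... computation intended to overlap with the communication ... */

    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %.0f\n", rank, recvbuf[0]);

    MPI_Finalize();
    return 0;
}
```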

Towards MPI progression layer elimination with TCP and SCTP

B. Penoff, A. Wagner
2006 Proceedings 20th IEEE International Parallel & Distributed Processing Symposium  
MPI middleware glues together the components necessary for execution.  ...  We discuss how this eliminated TCP-based design doesn't scale and show a more scalable design based on the Stream Control Transmission Protocol (SCTP) that has a thinned communication component.  ...  When an MPI application begins, each MPI process executes MPI_Init(). Within this call for LAM-TCP, each process initializes a struct that contains the state between itself and each other process.  ... 
doi:10.1109/ipdps.2006.1639497 dblp:conf/ipps/PenoffW06 fatcat:czx356jngbggbphtnvyzsgiiza
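The excerpt above notes that, in LAM-TCP, MPI_Init() sets up per-peer connection state in every process. The sketch below only illustrates that O(n) bookkeeping idea; the peer_state fields are hypothetical and not taken from the LAM/MPI sources.

```c
/* Minimal sketch of the per-peer state idea described in the snippet.
 * The struct fields are hypothetical; they only illustrate what "state
 * between itself and each other process" might contain for a TCP-based
 * MPI middleware. */
#include <mpi.h>
#include <stdlib.h>

struct peer_state {        /* hypothetical per-peer bookkeeping */
    int  rank;             /* rank of the remote process */
    int  sockfd;           /* TCP socket to that process (-1 if not open) */
    long bytes_sent;       /* traffic counters kept per connection */
    long bytes_recv;
};

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);               /* each process enters MPI_Init */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* One entry per remote process: this is the O(n) per-process state
     * that the paper argues does not scale for TCP-based designs. */
    struct peer_state *peers = calloc(size, sizeof *peers);
    for (int i = 0; i < size; i++) {
        peers[i].rank = i;
        peers[i].sockfd = -1;              /* connections opened lazily */
    }

    free(peers);
    MPI_Finalize();
    return 0;
}
```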

Performance characterization of a molecular dynamics code on PC clusters: is there any easy parallelism in CHARMM?

M. Taufer, E. Perathoner, A. Cavalli, A. Caflisch, T. Stricker
2002 Proceedings 16th International Parallel and Distributed Processing Symposium  
An increasing number of researchers are currently looking for affordable and adequate platforms to execute CHARMM or similar codes.  ...  computing platforms such as widely distributed computers (grid).  ...  We are very grateful to Roger Karrer for reading carefully through several drafts of our work.  ... 
doi:10.1109/ipdps.2002.1015505 dblp:conf/ipps/TauferPCCS02 fatcat:lyk2hxhd4vff7av2etmavmjilq

Experiences with Fine-Grained Distributed Supercomputing on a 10G Testbed

Kees Verstoep, Jason Maassen, Henri E. Bal, John W. Romein
2008 2008 Eighth IEEE International Symposium on Cluster Computing and the Grid (CCGRID)  
The class of large-scale distributed applications suitable for running on a grid is therefore larger than previously thought realistic.  ...  By optimizing these aspects, however, a 10G grid can obtain high performance for this type of communication-intensive application.  ...  We thank SURFnet and Nortel for their efforts to provide DAS-3 with a state-of-the-art Optical Private Network.  ... 
doi:10.1109/ccgrid.2008.71 dblp:conf/ccgrid/VerstoepMBR08 fatcat:5mqqo5q4h5ek7ckzq6qt5evrmm

A platform independent communication library for distributed computing

Derek Groen, Steven Rieder, Paola Grosso, Cees de Laat, Simon Portegies Zwart
2010 Procedia Computer Science  
Our library couples several local MPI applications through a long distance network using, for example, optical links.  ...  The only requirements are a C++ compiler and at least one open port to a wide area network on each site.  ...  Also we are grateful to Tomoaki Ishiyama for his work on interfacing GreeM with MPWide and valuable development discussions. We also would like to thank Hans Blom for performing preliminary tests.  ... 
doi:10.1016/j.procs.2010.04.303 fatcat:wbstk4zlznahlhkgb5ieq55x3a

Performance Prediction in a Grid Environment [chapter]

Rosa M. Badia, Francesc Escalé, Edgar Gabriel, Judit Gimenez, Rainer Keller, Jesús Labarta, Matthias S. Müller
2004 Lecture Notes in Computer Science  
This application has been adapted to be run on a heterogeneous computational Grid by means of PACX-MPI. The analysis and optimisation is based on trace-driven tools, mainly Dimemas and Vampir.  ...  Knowing the performance of an application in a Grid environment is an important issue in application development and for scheduling decisions.  ...  The middleware PACX-MPI [5] is an optimized MPI implementation and enables MPI-conforming applications to be run on a heterogeneous computational Grid without requiring the programmer to change  ... 
doi:10.1007/978-3-540-24689-3_32 fatcat:deuxonacmvd7bef4muimfwkg2u

Cloud Computing for Teaching and Learning MPI with Improved Network Communications

Fernando Gomez-Folgar, Antonio J. García-Loureiro, Tomás Fernandez Pena, J. Isaac Zablah, Raúl Valín Ferreiro
2012 International Workshop on Learning Technology for Education in Cloud  
In order to test a cloud infrastructure as a tool for learning MPI, two different scenarios were evaluated in this work using CloudStack: a virtual cluster as an MPI execution environment, and an improved virtual cluster whose MPI communication latency was improved.  ...  The first one constitutes a virtual cluster for executing MPI applications. A virtual cluster can be defined as a cluster composed of virtual machines.  ... 

Running Parallel Applications with Topology-Aware Grid Middleware

Pavel Bar, Camille Coti, Derek Groen, Thomas Herault, Valentin Kravtsov, Assaf Schuster, Martin Swain
2009 2009 Fifth IEEE International Conference on e-Science  
Results are given based on running the topology-aware applications on the Grid'5000 infrastructure.  ...  The concept of topology-aware grid applications is derived from parallelized computational models of complex systems that are executed on heterogeneous resources, either because they require specialized  ...  of an application executed on a single cluster.  ... 
doi:10.1109/e-science.2009.48 dblp:conf/eScience/BarCGHKSS09 fatcat:lqzthjkcjra4vizv4scqlfjp7u

Efficient MPI Collective Operations for Clusters in Long-and-Fast Networks

Motohiko Matsuda, Tomohiro Kudoh, Yuetsu Kodama, Ryousei Takano, Yutaka Ishikawa
2006 2006 IEEE International Conference on Cluster Computing  
Several MPI systems for Grid environments, in which clusters are connected by wide-area networks, have been proposed.  ...  On the other hand, for cluster MPI systems, a bcast algorithm by van de Geijn et al. and an allreduce algorithm by Rabenseifner have been proposed, which are efficient in a high bisection bandwidth environment  ...  PACX-MPI [5], MPICH-G2 [10], and MagPIe [11] are MPI systems designed for Grid environments.  ... 
doi:10.1109/clustr.2006.311848 dblp:conf/cluster/MatsudaKKTI06 fatcat:ohhq2f3zdvds5mg7i2ujr7c33i
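The van de Geijn broadcast and the Rabenseifner allreduce cited above split a collective into bandwidth-efficient phases (scatter followed by allgather, and reduce-scatter followed by allgather, respectively). As a high-level illustration only, the sketch below composes an allreduce from MPI_Reduce_scatter and MPI_Allgather; real implementations work at the point-to-point level and handle counts that do not divide evenly, which this sketch assumes away.

```c
/* The Rabenseifner allreduce expressed, at a high level, as a
 * reduce-scatter followed by an allgather, composed from standard MPI
 * collectives purely for illustration. Assumes count is divisible by
 * the number of processes. */
#include <mpi.h>
#include <stdlib.h>

void allreduce_sum_sketch(double *in, double *out, int count, MPI_Comm comm)
{
    int size;
    MPI_Comm_size(comm, &size);
    int chunk = count / size;

    double *partial = malloc(chunk * sizeof *partial);
    int *counts = malloc(size * sizeof *counts);
    for (int i = 0; i < size; i++) counts[i] = chunk;

    /* Phase 1: each rank ends up owning the reduced values of one chunk. */
    MPI_Reduce_scatter(in, partial, counts, MPI_DOUBLE, MPI_SUM, comm);

    /* Phase 2: everyone collects all reduced chunks, in rank order. */
    MPI_Allgather(partial, chunk, MPI_DOUBLE, out, chunk, MPI_DOUBLE, comm);

    free(partial);
    free(counts);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int size;
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int count = 4 * size;                  /* divisible by the process count */
    double *in = malloc(count * sizeof *in);
    double *out = malloc(count * sizeof *out);
    for (int i = 0; i < count; i++) in[i] = i;

    allreduce_sum_sketch(in, out, count, MPI_COMM_WORLD);
    /* every element out[i] now equals size * i */

    free(in); free(out);
    MPI_Finalize();
    return 0;
}
```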

Exploring I/O Virtualization Data Paths for MPI Applications in a Cluster of VMs: A Networking Perspective [chapter]

Anastassios Nanos, Georgios Goumas, Nectarios Koziris
2011 Lecture Notes in Computer Science  
We study the network behavior of MPI applications.  ...  Our goal is to: (a) explore the implications of alternative data paths between applications and network hardware and (b) specify optimized solutions for scientific applications that put pressure on network  ...  Our agenda also consists of evaluating higher level frameworks for application parallelism based on MapReduce and its extensions in VM execution environments.  ... 
doi:10.1007/978-3-642-21878-1_82 fatcat:xwesmawg6zcu3i7drmwcqdhyry

Distributed Multiscale Computing with MUSCLE 2, the Multiscale Coupling Library and Environment [article]

Joris Borgdorff, Mariusz Mamonski, Bartosz Bosak, Krzysztof Kurowski, Mohamed Ben Belgacem, Bastien Chopard, Derek Groen, Peter V. Coveney, Alfons G. Hoekstra
2013 arXiv   pre-print
The local throughput of MPI is about two times higher, so very tightly coupled code should use MPI as a single submodel of MUSCLE 2; the distributed performance of GridFTP is lower, especially for small  ...  This multiscale component-based execution environment has a simple to use Java, C++, C, Python and Fortran API, compatible with MPI, OpenMP and threading codes.  ...  The work made use of computational resources provided by PL-Grid (Zeus cluster) and by hepia in Geneva (Gordias cluster).  ... 
arXiv:1311.5740v1 fatcat:csdwbyvyejfwddedrd4yu7pe2i

A lightweight communication library for distributed computing

Derek Groen, Steven Rieder, Paola Grosso, Cees de Laat, Simon Portegies Zwart
2010 Computational Science & Discovery  
Our library allows coupling of several local MPI applications through a long distance network and is specifically optimized for such communications.  ...  The only requirements are a C++ compiler and at least one open port to a wide area network on each site.  ...  Also we are grateful to Tomoaki Ishiyama for his work on interfacing and running the TreePM code with MPWide and his valuable feedback during development.  ... 
doi:10.1088/1749-4699/3/1/015002 fatcat:5a2uqieefnayddeqna7olfo64u
Showing results 1 — 15 out of 747 results