Enabling large-scale scientific workflows on petascale resources using MPI master/worker
2012
Proceedings of the 1st Conference of the Extreme Science and Engineering Discovery Environment on Bridging from the eXtreme to the campus and beyond - XSEDE '12
Finally, we describe how the system is being used to enable the execution of a very large seismic hazard analysis application on XSEDE resources. ...
In this paper we describe a new approach to executing large, fine-grained workflows on distributed petascale systems. ...
ACKNOWLEDGEMENTS: We would like to thank the XSEDE user support staff, and especially Matt McKenzie at NICS, for helping us set up and debug the CyberShake workflows on Kraken. ...
doi:10.1145/2335755.2335846
fatcat:tl3il2esvncnvlf7w2epbrok54
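The entry above centers on an MPI master/worker scheme for dispatching many fine-grained workflow tasks. As a rough illustration only (not the paper's implementation), a minimal task farm in mpi4py might look like the following, with placeholder task contents:

```python
# Minimal MPI master/worker task farm (illustrative sketch, not the paper's code).
# Assumes more tasks than workers. Run with e.g.: mpiexec -n 8 python master_worker.py
from mpi4py import MPI

TASK_TAG, STOP_TAG = 1, 2

def run_task(task):
    # Placeholder for one fine-grained workflow task.
    return task * task

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:                              # master: hand out tasks, collect results
    tasks = list(range(100))
    results = []
    status = MPI.Status()
    active = comm.Get_size() - 1
    # Seed each worker with one task, then keep feeding as results come back.
    for w in range(1, comm.Get_size()):
        comm.send(tasks.pop(), dest=w, tag=TASK_TAG)
    while active:
        result = comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG, status=status)
        results.append(result)
        if tasks:
            comm.send(tasks.pop(), dest=status.Get_source(), tag=TASK_TAG)
        else:
            comm.send(None, dest=status.Get_source(), tag=STOP_TAG)
            active -= 1
else:                                      # worker: loop until told to stop
    status = MPI.Status()
    while True:
        task = comm.recv(source=0, tag=MPI.ANY_TAG, status=status)
        if status.Get_tag() == STOP_TAG:
            break
        comm.send(run_task(task), dest=0)
```

Because each worker pulls a new task as soon as it finishes the previous one, the pattern balances uneven task runtimes, which is what makes it attractive for large bags of short tasks on petascale machines.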
Many-Task Computing and Blue Waters
[article]
2012
arXiv pre-print
HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. ...
The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. ...
Acknowledgments This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (award number OCI 0725070) and the state of Illinois ...
arXiv:1202.3943v1
fatcat:c4ou6svp25bzdgcnjc23nqgwiu
Petascale Tcl with NAMD, VMD, and Swift/T
2014
2014 First Workshop for High Performance Technical Computing in Dynamic Languages
MPI library on which it depends. ...
We now demonstrate the integration of the Swift/T high-performance parallel scripting language to enable high-level data flow programming in NAMD and VMD. ...
This work is also part of the Petascale Computational Resource (PRAC) grant "The Computational Microscope", which is supported by the National Science Foundation (award number OCI-0832673). ...
doi:10.1109/hptcdl.2014.7
dblp:conf/sc/PhillipsSVAWWS14
fatcat:7lr5vmqlifbilizv3wj4xtuc6m
Multi-SPMD Programming Model with YML and XcalableMP
[chapter]
2020
XcalableMP PGAS Programming Language
It has been evident that simple SPMD programs such as MPI or XMP, or hybrid programs such as OpenMP/MPI, cannot exploit post-petascale or exascale systems efficiently due to the increasing complexity of applications ...
The mSPMD programming model has been proposed to realize scalable applications on huge and hierarchical systems. ...
OmniRPC supports a master-worker programming model in which remote serial programs (rexs) are executed via exec, rsh, or ssh. ...
doi:10.1007/978-981-15-7683-6_9
fatcat:za3sx7wyq5eotcfkpc2q3ohj2i
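The OmniRPC model described in the entry above launches remote serial programs over exec, rsh, or ssh. A generic Python illustration of that dispatch idea follows; the host names and remote executable path are hypothetical, and this is not the OmniRPC interface:

```python
# Generic master-worker dispatch over ssh, in the spirit of the OmniRPC model
# described above. Host names and the remote executable are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor

HOSTS = ["node01", "node02", "node03"]          # assumed remote worker hosts
REMOTE_EXE = "/opt/app/bin/rex_worker"          # assumed remote serial program

def run_remote(host_and_task):
    host, task_id = host_and_task
    # Launch the remote serial program via ssh and capture its output.
    out = subprocess.run(
        ["ssh", host, REMOTE_EXE, str(task_id)],
        capture_output=True, text=True, check=True)
    return task_id, out.stdout.strip()

tasks = [(HOSTS[i % len(HOSTS)], i) for i in range(12)]
with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
    for task_id, result in pool.map(run_remote, tasks):
        print(task_id, result)
```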
On the use of burst buffers for accelerating data-intensive scientific workflows
2017
Proceedings of the 12th Workshop on Workflows in Support of Large-Scale Science - WORKS '17
Science applications frequently produce and consume large volumes of data, but delivering this data to and from compute resources can be challenging, as parallel file system performance is not keeping ...
By running a subset of the SCEC CyberShake workflow, a production seismic hazard analysis workflow, we find that using burst buffers offers read and write improvements of about an order of magnitude, and ...
Scientific workflows are a mainstream solution to process complex and large-scale computations involving numerous operations on large datasets efficiently. ...
doi:10.1145/3150994.3151000
dblp:conf/sc/SilvaCD17
fatcat:tomvvnfqgnbdlbjyl67x4u7z5y
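A hedged sketch of the staging pattern behind the entry above: inputs are copied from the parallel file system into a job-local burst-buffer path before the compute step, and outputs are drained back afterwards. The environment variable, directory paths, and task executable are assumptions for illustration, not details from the paper:

```python
# Stage data through a burst buffer around a compute step (illustrative sketch).
# The environment variable name, paths, and executable are assumptions.
import os, shutil, subprocess

bb_root = os.environ.get("DW_JOB_STRIPED", "/tmp/bb")   # assumed burst-buffer path
pfs_in, pfs_out = "/lustre/project/inputs", "/lustre/project/outputs"

stage_in = os.path.join(bb_root, "inputs")
stage_out = os.path.join(bb_root, "outputs")
shutil.copytree(pfs_in, stage_in, dirs_exist_ok=True)    # stage in from the PFS
os.makedirs(stage_out, exist_ok=True)

# Run the workflow task against the fast burst-buffer copies.
subprocess.run(["./seismogram_task", "--in", stage_in, "--out", stage_out],
               check=True)

shutil.copytree(stage_out, pfs_out, dirs_exist_ok=True)  # stage out to the PFS
```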
Physics-based seismic hazard analysis on petascale heterogeneous supercomputers
2013
Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis on - SC '13
These performance improvements to critical scientific application software, coupled with improved co-scheduling capabilities of our workflow-managed systems, make a statewide hazard model a goal reachable ...
The performance improvements of GPU-based AWP are expected to save millions of core-hours over the next few years as physics-based seismic hazard analysis is developed using heterogeneous petascale supercomputers ...
acceleration values for each of about 410,000 earthquakes in an MPI master/worker environment. ...
doi:10.1145/2503210.2503300
dblp:conf/sc/CuiPOZWCLGCCSDMJ13
fatcat:3klm3s6hqjhgblxcwybum7tfza
Pegasus, a workflow management system for science automation
2015
Future Generation Computer Systems
Modern science often requires the execution of large-scale, multi-stage simulation and data analysis pipelines to enable the study of complex systems. ...
on distributed computational resources: campus clusters, national cyberinfrastructures, and commercial and academic clouds. ...
This research was done using resources provided by the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science. ...
doi:10.1016/j.future.2014.10.008
fatcat:u5lbouuekvduhfdvdi7phfyn7i
Executing dynamic heterogeneous workloads on Blue Waters with RADICAL-Pilot
2016
Zenodo
These workloads can benefit from being executed at scale on HPC resources, but a tension exists between the workloads' resource utilization requirements and the capabilities of the HPC system software ...
RADICAL-Pilot is a scalable and portable pilot system that enables the execution of such diverse workloads. ...
This work is also part of "The Power of Many: Scalable Compute and Data-Intensive Science on Blue Waters", a PRAC allocation supported by the National Science Foundation (NSF-1516469). ...
doi:10.5281/zenodo.3373740
fatcat:psmiyicmxjcsrhms6mr7vuwmgi
Run-Time Exploitation of Application Dynamism for Energy-Efficient Exascale Computing (READEX)
2015
2015 IEEE 18th International Conference on Computational Science and Engineering
Efficiently utilizing the resources provided on current petascale and future exascale systems will be a challenging task, potentially causing a large amount of underutilized resources and wasted energy ...
However, using an automatic optimization approach, application dynamism can be analyzed at design-time and used to optimize system configurations at run-time. ...
PTF provides a number of predefined tuning plugins, including dynamic voltage and frequency scaling, compiler flag selection, MPI run-time settings, OpenMP parallelism capping, and MPI master-worker ...
doi:10.1109/cse.2015.55
dblp:conf/cse/OleynikGSKN15
fatcat:bovu4aqq6jfh7lwsoloaiflwmm
Interlanguage parallel scripting for distributed-memory scientific computing
2015
Proceedings of the 10th Workshop on Workflows in Support of Large-Scale Science - WORKS '15
However, deploying scripted applications on large-scale parallel computer systems such as the IBM Blue Gene/Q or Cray XE6 is a challenge because of issues including operating system limitations, interoperability ...
Scripting languages such as Python and R have been widely adopted as tools for the development of scientific software because of the expressiveness of the languages and their available libraries. ...
We thank Ray Osborn for collaboration on NeXus, Victor Zavala for collaboration on power grid, and Reinhard Neder for collaboration on DISCUS. ...
doi:10.1145/2822332.2822338
dblp:conf/sc/WozniakAMKWF15
fatcat:q4dpgsmusra4bibelpkhijrbd4
The scaling of many-task computing approaches in python on cluster supercomputers
2013
2013 IEEE International Conference on Cluster Computing (CLUSTER)
We describe these packages in detail and compare their features as applied to many-task computing on a cluster, including a scaling study using over 12,000 cores and several thousand tasks. ...
We use mpi4py as a baseline for our comparisons. ...
[1] have extended the Falkon task execution framework for petascale systems. Rynge et al. [32] apply an MPI-based master/worker approach on petascale systems for earthquake data analysis.
doi:10.1109/cluster.2013.6702678
dblp:conf/cluster/LunacekBH13
fatcat:4y6t3uwtqrfodigqghxady42wy
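The entry above uses mpi4py as a baseline for many-task computing in Python. As a loose stand-in (not the paper's benchmark code), mpi4py's futures interface can fan a bag of small independent tasks out across MPI ranks:

```python
# Many small independent tasks farmed out over MPI ranks with mpi4py.futures.
# Illustrative only; the task body is a placeholder.
# Run with e.g.: mpiexec -n 64 python -m mpi4py.futures many_tasks.py
from mpi4py.futures import MPIPoolExecutor

def task(i):
    # Placeholder for one short, independent unit of work.
    return sum(j * j for j in range(i % 1000))

if __name__ == "__main__":
    with MPIPoolExecutor() as pool:
        results = list(pool.map(task, range(5000)))
    print(len(results), "tasks completed")
```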
D7.2.2: Final Report on Collaboration with Communities
2012
Zenodo
Task 7.2 established an effective collaboration with Tasks 7.5 and 7.6 to address specific aspects such as hybrid parallelization and large-dataset I/O optimization. ...
The present document describes the work carried out during the second year of activity of Task 7.2-1IP "Application Enabling with communities". ...
A large, so-called Ω-matrix is replicated over all the MPI tasks, and in very large-scale calculations there will not be enough memory to store the matrix on all processes. ...
doi:10.5281/zenodo.6553019
fatcat:bqbdsgji4vgfjb3lurtm6ydezm
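The memory problem noted in the entry above (a large matrix replicated on every MPI task) is commonly avoided by distributing the matrix across ranks instead. Below is a minimal block-row distribution sketch with mpi4py and NumPy; the sizes, fill values, and names are illustrative and not taken from the deliverable:

```python
# Distribute a large matrix by block rows instead of replicating it on every rank.
# Sizes and values are illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 8192                                   # global dimension of the Omega matrix
rows = n // size                           # assume n divisible by size for brevity
local_omega = np.zeros((rows, n))          # each rank holds only its own row block

# Fill the local block; global row r lives on rank r // rows.
for i in range(rows):
    global_row = rank * rows + i
    local_omega[i, :] = global_row          # placeholder values

# Example global operation: a distributed Frobenius norm via a reduction.
local_sq = float(np.sum(local_omega ** 2))
norm = np.sqrt(comm.allreduce(local_sq, op=MPI.SUM))
if rank == 0:
    print("||Omega||_F =", norm)
```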
Distributed workflows with Jupyter
2021
Future Generation Computer Systems
portable implementation working on hybrid Cloud-HPC infrastructures. ...
The proposed Jupyter-workflow (Jw) system is evaluated on common scenarios for High Performance Computing (HPC) and Cloud, showing its potential in lowering the barriers between prototypical Notebooks ...
Interactive simulation at scale: running Quantum ESPRESSO on Jupyter. In order to assess the Jw capabilities to enable interactive simulations of realistic, large-scale systems, we implement a Notebook ...
doi:10.1016/j.future.2021.10.007
fatcat:2al5dpxqmrgeboqxgkgxbefxga
Efficient clustered server-side data analysis workflows using SWAMP
2009
Earth Science Informatics
Technology continues to enable scientists to set new records in data collection and production, intensifying the need for large-scale tools to efficiently process and analyze the growing mountain of data ...
Built-in script compilation isolates file accesses and generates workflows, while a cluster-capable execution engine partitions and executes the resulting workflow. ...
Acknowledgements We acknowledge the modeling groups, the Program for Climate Model Diagnosis and Intercomparison (PCMDI) and the WCRP's Working Group on Coupled Modelling (WGCM) for their roles in making ...
doi:10.1007/s12145-009-0021-z
fatcat:mkncfst4ofac7g5amnqpzxlvim
Hierarchical parallelisation of functional renormalisation group calculations — hp-fRG
2016
Computer Physics Communications
We exploit three levels of parallelisation: Distributed computing by means of Message Passing (MPI), shared-memory computing using OpenMP, and vectorisation by means of SIMD units (single-instruction-multiple-data ...
In this work we report on a multi-level parallelisation of the underlying computational machinery and show that this can speed up the code by several orders of magnitude. ...
of master-worker parallelism is given in [22]. ...
doi:10.1016/j.cpc.2016.05.024
fatcat:77jwf2ud6fh6vjnnllhejvxxj4
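As a loose, language-shifted illustration of the multi-level layering described in the entry above, the sketch below combines MPI across processes with vectorised NumPy inside each rank; the paper's actual implementation uses MPI, OpenMP, and SIMD intrinsics, which this Python stand-in does not reproduce:

```python
# Two-level parallelism stand-in: MPI between ranks, vectorised NumPy within a rank.
# This only illustrates the layering; it is not the hp-fRG code.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_points = 1_000_000
chunk = n_points // size                       # outer level: split work over ranks
x = np.linspace(rank * chunk, (rank + 1) * chunk, chunk, endpoint=False)

local = np.sum(np.sin(x) ** 2)                 # inner level: vectorised array kernel
total = comm.reduce(local, op=MPI.SUM, root=0) # combine partial results across ranks
if rank == 0:
    print("result:", total)
```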