The Petascale DTN Project: High Performance Data Transfer for HPC Facilities
[article]
2021
arXiv
pre-print
This paper describes the Petascale DTN Project, an effort undertaken by four HPC facilities, which succeeded in achieving routine data transfer rates of over 1 PB/week between the facilities. ...
We describe the design and configuration of the Data Transfer Node (DTN) clusters used for large-scale data transfers at these facilities, the software tools used, and the performance tuning that enabled ...
It is our hope that the HPC facility enhancements he helped realize will one day help to cure cancer for everyone. ...
arXiv:2105.12880v2
fatcat:4642f6e37zasxadkajsmjad2ku
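A quick back-of-the-envelope check of what routine 1 PB/week implies as a sustained rate (a minimal Python sketch; the unit constants are standard definitions, not taken from the paper):

    # Sustained throughput implied by 1 PB/week of routine transfers
    PB_BITS = 1e15 * 8                # 1 PB in bits (SI decimal units)
    SECONDS_PER_WEEK = 7 * 24 * 3600  # 604,800 s

    rate_gbps = PB_BITS / SECONDS_PER_WEEK / 1e9
    print(f"1 PB/week ≈ {rate_gbps:.1f} Gb/s sustained")  # ≈ 13.2 Gb/s

Routine operation at that level therefore requires DTN clusters and tuning that deliver comfortably more than ~13 Gb/s end to end, with headroom for outages and bursts.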
SCISPACE: A Scientific Collaboration Workspace for File Systems in Geo-Distributed HPC Data Centers
[article]
2018
arXiv
pre-print
It improves information and resource sharing for joint simulation and analysis between the HPC data centers. ...
The evaluation results show an average 36% performance boost when the proposed native-data access is employed in collaborations. ...
In particular, the high-speed terabit network connections between HPC data centers expedite such collaborations. DOE's ESnet currently supports 100 Gb/s data transfers between DOE facilities. ...
arXiv:1803.08228v1
fatcat:cbbhqiijcjbmtg3w6gus3owymy
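For scale, some illustrative arithmetic for the 100 Gb/s ESnet figure quoted above (the 1 TB dataset size is an assumption for the example, not from the paper):

    # Ideal transfer time for a dataset over a 100 Gb/s path
    LINK_GBPS = 100    # from the abstract snippet
    DATASET_TB = 1.0   # hypothetical dataset size

    seconds = DATASET_TB * 1e12 * 8 / (LINK_GBPS * 1e9)
    print(f"{DATASET_TB} TB at {LINK_GBPS} Gb/s ≈ {seconds:.0f} s")  # ≈ 80 s, ignoring protocol overhead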
Towards Autonomic Science Infrastructure
2018
Proceedings of the 1st International Workshop on Autonomous Infrastructure for Science - AI-Science'18
We propose a hierarchical architecture that builds on earlier proposals for autonomic computing systems to realize an autonomous science infrastructure. ...
We review recent work that uses machine learning algorithms to improve computer system performance, and identify gaps and open issues. ...
We also would like to thank the anonymous reviewers for their helpful comments. ...
doi:10.1145/3217197.3217205
dblp:conf/hpdc/KettimuthuLFBSW18
fatcat:q465b3cyibarnowssx4jny6jvu
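The abstract builds on classic autonomic-computing proposals, whose core pattern is a monitor-analyze-plan-execute (MAPE) loop. A minimal sketch of that pattern follows; the class and callables are invented for illustration and are not from the paper:

    import time

    class AutonomicManager:
        """Toy MAPE loop: monitor a metric, analyze the error,
        plan a proportional correction, execute it."""

        def __init__(self, sensor, actuator, target):
            self.sensor = sensor      # callable returning the current metric
            self.actuator = actuator  # callable applying an adjustment
            self.target = target      # desired metric value

        def step(self):
            observed = self.sensor()        # Monitor
            error = self.target - observed  # Analyze
            adjustment = 0.5 * error        # Plan: proportional controller
            self.actuator(adjustment)       # Execute

        def run(self, interval_s=1.0, iterations=10):
            for _ in range(iterations):
                self.step()
                time.sleep(interval_s)

In a hierarchical architecture like the one proposed, such loops would presumably be stacked, with higher-level managers setting targets for lower-level ones.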
High-Throughput Computing on High-Performance Platforms: A Case Study
[article]
2017
arXiv
pre-print
This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production ...
In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. ...
... in particular high-performance computers (HPC). ...
arXiv:1704.00978v2
fatcat:xsbd4vmq3nc5rnomay2dhihdvy
Data Transfer and Network Services management for Domain Science Workflows
[article]
2022
arXiv
pre-print
As a result, data transfers can introduce a degree of uncertainty in workflow operations, and the associated lack of network information does not allow for either the workflow operations or the network ...
There is little ability for applications to interact with the network to exchange information, negotiate performance parameters, discover expected performance metrics, or receive status/troubleshooting ...
The innovative aspects of the project are the maintenance and performance of a collaborative (shared & common) petascale national facility over hundreds of autonomous computing sites, for many thousands ...
arXiv:2203.08280v2
fatcat:w52ay7bkbbf3xp3szg4ccx44aq
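To picture the missing application-network negotiation interface described above, here is a hypothetical client-side sketch; the record fields, hostnames, and the negotiate() stub are invented for illustration and do not correspond to any real service API:

    import json
    from dataclasses import dataclass

    @dataclass
    class TransferRequest:
        """Hypothetical record an application might submit to a network service."""
        src: str
        dst: str
        volume_gb: float   # data volume in gigabytes
        deadline_s: int    # desired completion window in seconds

    def negotiate(req: TransferRequest) -> dict:
        # Stand-in for a real service call: compute the bandwidth needed
        # to meet the deadline (ignoring protocol overhead) and "grant" it.
        needed_gbps = req.volume_gb * 8 / req.deadline_s
        return {"granted_gbps": round(needed_gbps, 1), "status": "accepted"}

    req = TransferRequest("dtn1.site-a.example", "dtn2.site-b.example",
                          volume_gb=5000, deadline_s=3600)
    print(json.dumps(negotiate(req)))  # {"granted_gbps": 11.1, "status": "accepted"}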
Real-Time XFEL Data Analysis at SLAC and NERSC: a Trial Run of Nascent Exascale Experimental Data Analysis
[article]
2021
arXiv
pre-print
XFEL experiments are a challenge to computing in two ways: i) due to the high cost of running XFELs, a fast turnaround time from data acquisition to data analysis is essential to make informed decisions ...
We achieved real-time data analysis with a turnaround time from data acquisition to full molecular reconstruction in as little as 10 min, sufficient time for the experiment's operators to make informed ...
The movers also perform the data transfer to the remote HPC sites, currently supporting NERSC and the SLAC Shared Scientific Data Facility (SDF). ...
arXiv:2106.11469v2
fatcat:7nsdvjxpd5anliptkf4h6fjkgu
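The "movers" mentioned in the snippet follow a common watch-and-ship pattern; below is a toy sketch of that pattern only (paths, file extension, and polling interval are assumptions, and the real facilities use dedicated transfer tools rather than a local copy):

    import shutil
    import time
    from pathlib import Path

    def mover_loop(acq_dir: Path, staging_dir: Path, poll_s: float = 5.0) -> None:
        """Toy data mover: copy newly acquired files into a staging area,
        from which a transfer tool would ship them to a remote HPC site."""
        seen: set[str] = set()
        while True:
            for f in acq_dir.glob("*.h5"):  # assumed detector file extension
                if f.name not in seen:
                    shutil.copy2(f, staging_dir / f.name)
                    seen.add(f.name)
            time.sleep(poll_s)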
PROJECT SUMMARY Overview
unpublished
... HPC system installed at the University of Colorado. ...
... venues, including the RMACC annual HPC symposium, Westnet, XSEDE, ACE-Ref, and CaRC. ...
... scripts, campus HPC data transfer nodes (DTN), and file system parameters), we will enable researchers to easily map their data workflows onto CloudLab, HPC infrastructures, and ultimately the national ...
fatcat:qa37w6dubjhyldyn2ofalondai
HEPiX Fall 2017 Workshop at KEK, Tsukuba, Japan - Trip Report
[article]
2017
... and notes from the HEPiX Fall 2017 workshop ...
HEP computing is being integrated into the University of Melbourne HPC, which is a petascale computing project. A cluster of 78 GPU nodes is replacing the Blue Gene cluster. ...
Some showcases: transferring 74 Gb/s in a DTN-based network, trying software-defined exchanges, transferring airline data. The goal is now to leave demo mode and move some use cases into production. ...
doi:10.17181/cern.kgnr.yh1w
fatcat:27fwfwc6pzeo5mpaf72bumwfdq
FUTURE COMPUTING 2012: The Fourth International Conference on Future Computational Technologies and Applications (committees, advisory chairs, and technical program committee)
2012
Foreword: The Fourth International Conference on Future Computational Technologies and Applications (FUTURE COMPUTING 2012) ...
unpublished
The creation of such a broad and high-quality conference program would not have been possible without their involvement. ...
We are grateful to the members of the FUTURE COMPUTING 2012 organizing committee for their help in handling the logistics and for their work to make this professional meeting a success. ...
In the model, α, β, and m stand for the time to transfer one unit of data, the communication latency, and the amount of data to be transferred, respectively; α and β are calculated at the beginning ... (a worked example of this cost model follows this entry).
fatcat:st7dioxwsjhuvpcgxavn5b4ez4
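The last snippet describes the standard linear (latency-plus-bandwidth) communication cost model, T(m) = β + α·m. A worked instance with illustrative parameter values (the numbers are assumptions for the example, not from the paper):

    # Linear communication-cost model: T(m) = beta + alpha * m
    alpha = 8e-10  # illustrative: seconds per byte (~10 Gb/s effective bandwidth)
    beta = 1e-4    # illustrative: 100 microsecond per-message latency
    m = 1e9        # bytes to transfer

    t = beta + alpha * m
    print(f"T({m:.0e} bytes) ≈ {t:.4f} s")  # ≈ 0.8001 s; latency is negligible for large m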