Analyzing Scientific Data Sharing Patterns for In-network Data Caching
2021
Proceedings of the 2021 Workshop on Systems and Network Telemetry and Analytics
It also analyzes data access patterns in applications and the impacts of caching nodes on the regional data repository. ...
In-network data caching for shared data has been shown to reduce redundant data transfers and consequently save network traffic volume. ...
The authors also gratefully acknowledge Adam Slagell, Anne White, Eli Dart, Eric Pouyoul, George Robb, Goran Pejovic, Kate Robinson, Yatish Kumar, Dima Mishin, Justas Balcas, and Michael Sinatra for their ...
doi:10.1145/3452411.3464441
fatcat:66i6yowklbef5oxzc6rlnbcmli
INFN Tier–1: a distributed site
2019
EPJ Web of Conferences
The INFN Tier-1 center at CNAF has been extended in 2016 and 2017 in order to include a small amount of resources (∼ 22 kHS06 corresponding to ∼ 10% of the CNAF pledges for LHC in 2017) physically located ...
In this contribution we describe the issues and the results of the production configuration, focusing both on the management aspects and on the performance provided to end-users. ...
... for LHC computing with small technical changes, mostly involving resource policies. ...
doi:10.1051/epjconf/201921408002
fatcat:rd4ghjfwjvhnld5jh3kya336fm
Analyzing scientific data sharing patterns for in-network data caching
[article]
2021
arXiv
pre-print
It also analyzes data access patterns in applications and the impacts of caching nodes on the regional data repository. ...
In-network data caching for shared data has been shown to reduce redundant data transfers and consequently save network traffic volume. ...
This is especially relevant to the High Energy Physics (HEP) community because the LHC instrument is located at CERN in Switzerland, while Tier-1 sites in the US for the ATLAS and CMS experiments are located ...
arXiv:2105.00964v1
fatcat:w7m2hdnexbga5npumgg7a4pxhy
The Quest to solve the HL-LHC data access puzzle
2020
EPJ Web of Conferences
In addition we will present the results of the analysis and modelling efforts based on data access traces of the experiments. ...
In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account ...
In this way the space on disk at the computing sites is optimised for data being actively used, and this can potentially be completely delegated to a stateless cache. ...
doi:10.1051/epjconf/202024504027
fatcat:tamixy5eanh4dlzy7hi5gfzsgq
Deploying in-network caches in support of distributed scientific data sharing
[article]
2022
arXiv
pre-print
We include thoughts on possible future deployment models involving caching node installations at the edge along with methods to scale our approach. ...
To that end, we describe the use of in-network caching service deployments as a means to improve application performance and preserve available network bandwidth in a high energy physics data distribution ...
This is especially relevant to the High Energy Physics (HEP) community, with the LHC instrument at CERN and Tier-1 sites for the ATLAS experiment at Brookhaven National Laboratory and the CMS experiment at Fermi National ...
arXiv:2203.06843v1
fatcat:lwk26kboezgyzlqfp762ui43ou
Enabling ATLAS big data processing on Piz Daint at CSCS
2020
EPJ Web of Conferences
This will require some radical changes to the computing models for the data processing of the LHC experiments. ...
We report on the technical challenges and solutions adopted to enable the processing of the ATLAS experiment data on the European flagship HPC Piz Daint at CSCS, now acting as a pledged WLCG Tier-2 centre ...
Acknowledgments We acknowledge the support of the Swiss National Science Foundation and thank CSCS for the provision of the integration systems. ...
doi:10.1051/epjconf/202024509005
fatcat:2sobxyicgbh2zishph5ornub2m
The CMS data aggregation system
2010
Procedia Computer Science
Based on the use cases of the CMS experiment, we have performed a set of detailed, large-scale tests, the results of which we present in this paper. ...
Meta-data plays a significant role in large modern enterprises, research experiments and digital libraries where it comes from many different sources and is distributed in a variety of digital formats. ...
While designed for the CMS High-Energy Physics experiment at LHC, the strategies and technology could be used elsewhere. The rest of this paper is organized as follows. ...
doi:10.1016/j.procs.2010.04.172
fatcat:u5m2debfsnexplralnti64c674
CMS computing upgrade and evolution
2013
2013 IEEE Nuclear Science Symposium and Medical Imaging Conference (2013 NSS/MIC)
The computing system of the CMS experiment will need to face new challenges from 2015 on when LHC restarts with an anticipated higher detector output rate and event complexity, but with only a limited ...
CMS is improving the data storage, distribution and access as well as the processing efficiency. ...
The Compact Muon Solenoid experiment(CMS) [3] is one of the 4 experiments at the LHC. The institutions participating in CMS contribute with about a third of the WLCG computing resources. ...
doi:10.1109/nssmic.2013.6829578
fatcat:ltjvdywfgzaerpfyfootv7egea
First experiences with a portable analysis infrastructure for LHC at INFN
2021
EPJ Web of Conferences
... analysis environment for the CMS experiment. ...
At the Italian National Institute for Nuclear Physics (INFN) a portable software stack for analysis has been proposed, based on cloud-native tools and capable of providing users with a fully integrated ...
... provider, as implemented by INDIGO IAM; in the current testbed we rely on the dedicated service deployed for the CMS experiment at CERN. ...
doi:10.1051/epjconf/202125102045
fatcat:ubtyoepjsveqthwq3eze5tysdm
Extending the farm on external sites: the INFN Tier-1 experience
2017
Journal of Physics, Conference Series
The Tier-1 at CNAF is the main INFN computing facility offering computing and storage resources to more than 30 different scientific collaborations including the 4 experiments at the LHC. ...
A huge increase in computing needs is also foreseen in the following years, driven mainly by the experiments at the LHC (especially starting with Run 3 from 2021) but also by other upcoming experiments ...
Over the long term, we have nevertheless observed that efficiency is better for I/O-demanding jobs running at CNAF, notably for ATLAS and CMS. ...
doi:10.1088/1742-6596/898/8/082018
fatcat:nc2bk44urbeupjfxuklkcuauw4
Analysis and modeling of data access patterns in ATLAS and CMS
2019
Zenodo
In this work we derived usage patterns based on traces and logs from the data and workflow management systems of CMS and ATLAS, and simulated the impact of different caching and data lifecycle management ...
Data corresponding to one year of operation and covering all Grid sites have been the basis for the analysis. ...
Introduction • Data access/storage by production and analysis jobs drive the design and cost of data management systems for LHC computing • At the scale of HL-LHC the current storage and access strategies ...
doi:10.5281/zenodo.3598836
fatcat:bhch2gwtkjaytphnu6qui53usm
CMS computing operations during run 1
2014
Journal of Physics, Conference Series
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. ...
We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015. ...
Acknowledgements We would like to thank the funding agencies supporting the CMS experiment and the LHC computing efforts. ...
doi:10.1088/1742-6596/513/3/032040
fatcat:2pl7f2lfx5g5fopbdjv2ak2zpm
A federated Xrootd cache
2018
Journal of Physics, Conference Series
With the shift in the LHC experiments from the tiered computing model, where data was prefetched and stored at the computing site, towards a bring-data-on-the-fly model came an opportunity. ...
Some fine tuning and scaling tests have been performed to make it fit for the CMS analysis case. ...
Acknowledgments This work is supported in part by the National Science Foundation through awards PHY-1148698 and PHY-1624356. ...
doi:10.1088/1742-6596/1085/3/032025
fatcat:ddbckeqct5cs3hsnuwfsjzlmxq
Scalable Database Access Technologies for ATLAS Distributed Computing
[article]
2009
arXiv
pre-print
A main focus of ATLAS database operations is on the worldwide distribution of the Conditions DB data, which are necessary for every ATLAS data processing job. ...
To collect experience and provide input for a best choice of technologies, several promising options for efficient database access in user analysis were evaluated successfully. ...
COOL was designed as a common technology for experiments at the Large Hadron Collider (LHC). ...
arXiv:0910.0097v2
fatcat:xfmj54oblrflrkwms2otatb7d4
HTTP as a Data Access Protocol: Trials with XrootD in CMS's AAA Project
2017
Journal of Physics, Conference Series
The testbed consists of a set of machines at the Caltech Tier2 that improve the support infrastructure for data federations at CMS. ...
An initial testbed at Caltech has been built and changes in the CMS software (CMSSW) are being implemented in order to improve HTTP support. ...
Acknowledgments The authors would like to acknowledge the support of the software and computing program of the CMS experiment at the LHC. ...
doi:10.1088/1742-6596/898/6/062042
fatcat:ibndsulrhjcuxmipr37apb2bhi
Showing results 1 — 15 out of 599 results