49 Hits in 10.8 sec

Establishing Applicability of SSDs to LHC Tier-2 Hardware Configuration

Samuel C Skipsey, Wahid Bhimji, Mike Kenyon
2011 Journal of Physics: Conference Series  
We estimate the effectiveness of affordable SSDs in the context of worker nodes, on a large Tier-2 production setup, using both low-level tools and real LHC I/O-intensive data analysis jobs, comparing and  ...  We consider the applicability of each solution in the context of its price/performance metrics, with an eye on the pragmatic issues facing Tier-2 provision and upgrades  ...  Acknowledgments The authors would like to thank Dan van der Ster (and Johannes Elmsheuser) for providing the HammerCloud framework necessary for this research, and allowing the large number of rapid tests  ... 
doi:10.1088/1742-6596/331/5/052019 fatcat:5t4tksjqenbphmrhozvkqmcyau

Emerging Computing Technologies in High Energy Physics [article]

Amir Farbin
2009 arXiv   pre-print
While in the early 90s High Energy Physics (HEP) led the computing industry by establishing the HTTP protocol and the first web servers, the long time-scale for planning and building modern HEP experiments  ...  I will overview some of the fundamental computing problems in HEP computing and then present the current state and future potential of employing new computing technologies in addressing these problems.  ...  In order to observe the benefits of the fast random access of SSDs, we ran eight instances of these applications with the data stored on a single HD or SSD.  ... 
arXiv:0910.3440v1 fatcat:avvkikyqznbrfhcz6abmshdoje

Running and testing GRID services with Puppet at GRIF-IRFU

S Ferry, F Schaer, JP Meyer
2015 Journal of Physics: Conference Series  
GRIF is a distributed Tier-2 centre, made of 6 different centres in the Paris region, and serving many VOs.  ...  One of the sub-sites, GRIF-IRFU, hosted and maintained at the CEA-Saclay centre, moved a year ago to configuration management using Puppet.  ...  The goal is to be part of the WLCG project as a Tier-2 site for the LHC experiments, as well as to provide computing and storage resources for non-LHC experiments included in the EGI project.  ... 
doi:10.1088/1742-6596/664/5/052013 fatcat:5jnqtudwofcj3l3tpbqj3w2mju

Operational security, threat intelligence & distributed computing: the WLCG Security Operations Center Working Group

David Crooks, Liviu Vâlsan, Kashif Mohammad, Shawn McKee, Paul Clark, Adam Boutcher, Adam Padée, Michał Wójcik, Henryk Giemza, Bas Kreukniet, A. Forti, L. Betev (+3 others)
2019 EPJ Web of Conferences  
The work of the Worldwide LHC Computing Grid (WLCG) Security Operations Center (SOC) Working Group (WG) [1] is to pursue these goals to form a reference design (or guidelines) for WLCG sites of different  ...  The strategy of the group is to identify necessary components - starting with threat intelligence (MISP [2]) and network intrusion detection (Bro [3]), building a working model over time.  ...  Michigan WLCG Tier-2 The configuration of the SOC at Michigan relies upon a local Elasticsearch cluster, originally dedicated to capturing system and device logging.  ... 
doi:10.1051/epjconf/201921403029 fatcat:yf3tiwn6dfhzngqubxxcijo5c4

dCache, agile adoption of storage technology

A P Millar, T Baranova, G Behrmann, C Bernardt, P Fuhrmann, D O Litvintsev, T Mkrtchyan, A Petersen, A Rossi, K Schwank
2012 Journal of Physics: Conference Series  
As with disk-tape, it is often too expensive to store all data on SSDs. Instead, dCache is investigating adding support for SSDs as a third tier.  ...  SSDs and 3-tier model dCache was initially developed to improve access to tape by caching files that were requested.  ... 
doi:10.1088/1742-6596/396/3/032077 fatcat:6wptejah4zeztghenjid46mtke

First experiences with a portable analysis infrastructure for LHC at INFN

Diego Ciangottini, Tommaso Boccali, Andrea Ceccanti, Daniele Spiga, Davide Salomoni, Tommaso Tedeschi, Mirco Tracolli, C. Biscarat, S. Campana, B. Hegner, S. Roiser, C.I. Rovelli (+1 others)
2021 EPJ Web of Conferences  
LHC communities, in terms of total resource needs, user satisfaction and in the reduction of end time to publication.  ...  The challenges proposed by the HL-LHC era are not limited to the sheer amount of data to be processed: the capability of optimizing the analyser's experience will also bring important benefits for the  ...  A key to the success is now to evolve the system in order to establish feasibility for HL-LHC workflows at scale and to this end the plan is to provide an instance of the facility for the physicists at  ... 
doi:10.1051/epjconf/202125102045 fatcat:ubtyoepjsveqthwq3eze5tysdm

Integrating HPC into an agile and cloud-focused environment at CERN

Pablo Llopis, Carolina Lindqvist, Nils Høimyr, Dan van der Ster, Philippe Ganz, A. Forti, L. Betev, M. Litmaath, O. Smirnova, P. Hristov
2019 EPJ Web of Conferences  
Our approach has been to integrate the HPC facilities as far as possible with the HTC services in our data centre, and to take advantage of an agile infrastructure for updates, configuration and deployment  ...  Experience and benchmarks of MPI applications across Infiniband with shared storage on CephFS are discussed, as well as the setup of the SLURM scheduler for HPC jobs with a provision for backfill of HTC workloads  ...  Acknowledgements We would like to thank the anonymous reviewers for improving the quality of this submission.  ... 
doi:10.1051/epjconf/201921407025 fatcat:wpts56ywlramhbhhibexylxcfi

The Widening Gulf between Genomics Data Generation and Consumption: A Practical Guide to Big Data Transfer Technology

Frank A. Feltus, Joseph R. Breen, Juan Deng, Ryan S. Izard, Christopher A. Konger, Walter B. Ligon, Don Preuss, Kuang-Ching Wang
2015 Bioinformatics and Biology Insights  
Specifically, we discuss four key areas: 1) data transfer networks, protocols, and applications; 2) data transfer security including encryption, access, firewalls, and the Science DMZ; 3) data flow control  ...  A primary intention of this article is to orient the biologist in key aspects of the data transfer process in order to frame their genomics-oriented needs to enterprise IT professionals.  ...  Acknowledgments We would like to thank the National Science Foundation for funding of our research efforts through these awards: #58501934 (JRB), #1245936 (KCW, FAF), #1447771 (WBL, FAF), and #1443040  ... 
doi:10.4137/bbi.s28988 pmid:26568680 pmcid:PMC4636112 fatcat:xauypfow4jht5ersycrbislpcm

Planning the Future of U.S. Particle Physics (Snowmass 2013): Chapter 9: Computing [article]

L. A. T. Bauerdick, S. Gottlieb, G. Bell, K. Bloom, T. Blum, D. Brown, M. Butler, A. Connolly, E. Cormier, P. Elmer, M. Ernst, I. Fisk, G. Fuller, R. Gerber (+16 others)
2014 arXiv   pre-print
These reports present the results of the 2013 Community Summer Study of the APS Division of Particles and Fields ("Snowmass 2013") on the future program of particle physics in the U.S.  ...  computing required in all areas of particle physics.  ...  The most likely high-level architecture for scientific analyses will be a hierarchy of tiers, in some ways analogous to the LHC computing model, where the (top) Tier 0 data is a complete capture of all  ... 
arXiv:1401.6117v1 fatcat:czmytmguqvcnrdm4q2ycjco4zm

A Roadmap for HEP Software and Computing R&D for the 2020s

Johannes Albrecht, Antonio Augusto Alves, Guilherme Amadio, Giuseppe Andronico, Nguyen Anh-Ky, Laurent Aphecetche, John Apostolakis, Makoto Asai, Luca Atzori, Marian Babik, Giuseppe Bagliesi, Marilena Bandieramonte (+298 others)
2019 Computing and Software for Big Science  
This programme requires large investments in detector hardware, either to build new facilities and experiments, or to upgrade existing ones.  ...  Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded.  ... 
doi:10.1007/s41781-018-0018-8 fatcat:xuvdsxgmjfe7zah4vbvpazao2i

Optimized access to ATLAS analyses within the ROOT framework

Umesh Dharmaji Worlikar, Christian Zeitnitz, Torsten Harenberg
2019 Zenodo  
Master's thesis submitted in fulfillment of the requirements for the degree of MSc.  ...  The subsequent performance testing and profiling were thus carried out on the same hardware configuration namely, SSD with warm cache mode.  ...  Based on ATLAS Software Tutorial [7] copied to the Tier-2 facilities for further analysis.  ... 
doi:10.5281/zenodo.2615193 fatcat:j7uf5mipezdfbfp34e6ijmwrt4

Business Model With Alternative Scenarios (D4.1)

Jakob Luettgau, Julian Kunkel, Jens Jensen, Bryan Lawrence
2017 Zenodo  
The report begins with a description of how climate and weather applications make use of HPC systems, the arising challenges and data requirements, and some trends that are likely to impact future data  ...  The report concentrates on identifying and evaluating the interplay of important cost factors, along with an introduction to relevant (storage and data movement) hardware and software technology, terminology  ...  configuration of object storage deployed (hardware, software, and amount of erasure coding).  ... 
doi:10.5281/zenodo.1228749 fatcat:fdwtwtfcwjhhbkiusam6njs26a

The Alice Experiment at the CERN LHC [chapter]

P. Kuijer
2003 Proceedings of the 31st International Conference on High Energy Physics Ichep 2002  
Most detector systems are scheduled to be installed and ready for data taking by mid-2008 when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition  ...  ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC  ...  The configuration database holds the data needed for the configuration of the whole control system; this includes the configuration of the control system itself, configuration of hardware devices such  ... 
doi:10.1016/b978-0-444-51343-4.50019-3 fatcat:m63gp5a6gnef7iql3obugr54ae

The ALICE experiment at the CERN LHC

The ALICE Collaboration, K Aamodt, A Abrahantes Quintana, R Achenbach, S Acounis, D Adamová, C Adler, M Aggarwal, F Agnese, G Aglieri Rinella, Z Ahammed, A Ahmad (+1150 others)
2008 Journal of Instrumentation  
Most detector systems are scheduled to be installed and ready for data taking by mid-2008 when the LHC is scheduled to start operation, with the exception of parts of the Photon Spectrometer (PHOS), Transition  ...  ALICE will also take data with proton beams at the top LHC energy to collect reference data for the heavy-ion programme and to address several QCD topics for which ALICE is complementary to the other LHC  ...  The configuration database holds the data needed for the configuration of the whole control system; this includes the configuration of the control system itself, configuration of hardware devices such  ... 
doi:10.1088/1748-0221/3/08/s08002 fatcat:w2fjx7g6qvf5tpnomsl36l5tfy

PUNCH4NFDI Consortium Proposal

The PUNCH4NFDI Consortium
2020 Zenodo  
fields of science.  ...  Organised in 7 task areas, the consortium ultimately aims at establishing FAIR digital research products for its communities and beyond, spending their entire lifecycle inside a "science data platform"  ...  These policies can then e.g. enforce the placement of hot data objects on an SSD tier and the placement of cold data on tape.  ... 
doi:10.5281/zenodo.5722894 fatcat:mfcvk55kqvgsthkp6dnxqyiyve
Showing results 1 — 15 out of 49 results