Designing Computing System Architecture and Models for the HL-LHC era
2015
Journal of Physics: Conference Series
This paper describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade. ...
The computing model chosen by CMS for the LHC startup used a distributed data management architecture [2] which placed datasets statically at sites. ...
The reliability of all WLCG computer centres has greatly improved through the experience gained during LHC Run 1. More sophisticated data management and access models are thus possible. ...
doi:10.1088/1742-6596/664/3/032010
fatcat:su5rkrq26fcb5k23l6kpf6icfm
Massively parallel computing at the Large Hadron Collider up to the HL-LHC
2015
Journal of Instrumentation
As the Large Hadron Collider (LHC) continues its upward progression in energy and luminosity towards the planned High-Luminosity LHC (HL-LHC) in 2025, the challenges of the experiments in processing increasingly ...
in which high computing performance is critical for executing the track reconstruction in the available time. ...
Acknowledgments The authors would like to thank the organizers of the INFIERI Summer School, especially Aurore Savoy-Navarro, for their work in making this excellent school happen. ...
doi:10.1088/1748-0221/2015/9/c09003
fatcat:3bdeuu22kjfx3kziwxbbe5bmsq
HEP Software Foundation Community White Paper Working Group -- Data Organization, Management and Access (DOMA)
[article]
2018
arXiv
pre-print
In this white paper we discuss challenges in DOMA that HEP experiments, such as the HL-LHC, will face as well as potential ways to address them. ...
Without significant changes to data organization, management, and access (DOMA), HEP experiments will find scientific output limited by how fast data can be accessed and digested by computational resources ...
The three main challenges for data in the HL-LHC era can be summarized as: 1. Big Data: The expected data volume will significantly increase in the HL-LHC era. ...
arXiv:1812.00761v1
fatcat:oa4kb76j4nga3ozlosgtfg5udu
Enabling Data Intensive Science on Supercomputers for High Energy Physics R&D Projects in HL-LHC Era
2020
EPJ Web of Conferences
The ATLAS experiment at CERN's Large Hadron Collider uses the Worldwide LHC Computing Grid, the WLCG, for its distributed computing infrastructure. ...
The problem will be even more severe for the next LHC phases. ...
Acknowledgments This work was funded in part by the U. S. Department of Energy, Office of Science, High Energy Physics and Advanced Scientific Computing under Contracts No. ...
doi:10.1051/epjconf/202022601007
fatcat:gzu3plwuu5haxisxrdm76l5ifq
Reconstruction in an imaging calorimeter for HL-LHC
[article]
2020
arXiv
pre-print
Mindful of the projected extreme pressure on computing capacity in the HL-LHC era, the algorithms are being designed with modern parallel architectures in mind. ...
A reconstruction framework is being developed to fully exploit the granularity and other significant features of the detector like precision timing, especially in the high pileup environment of HL-LHC. ...
At the same time TICL is being developed with modern technologies in mind in order to improve the computing performance, both online and offline, within CMSSW in the HL-LHC era. ...
arXiv:2004.10027v2
fatcat:qfs2ifzh4rfajh5cmm33ucto4i
WLCG Strategy towards HL-LHC
2018
Zenodo
It is a top-down prioritisation of the aspects highlighted in the HEP Software Foundation white paper "A Roadmap for HEP Software and Computing R&D for the 2020s" ...
This document summarises the strategy of the WLCG collaboration towards the challenges of HL-LHC. ...
Executive Summary: The goal of this document is to set out the path towards computing for HL-LHC in 2026/7. ...
doi:10.5281/zenodo.5897017
fatcat:zaamb22se5am5jxtylfgnpvmta
The Scalable Systems Laboratory: a Platform for Software Innovation for HEP
[article]
2020
arXiv
pre-print
testing of service components, and foundational systems R&D for accelerated services developed by the Institute. ...
The core team embeds and partners with other areas in the Institute, and with LHC and other HEP development and operations teams as appropriate, to define investigations and required service deployment ...
'time-to-insight' and maximize the HL-LHC physics potential (AS); and (3) development of data organization, management and access (DOMA) systems for the community's upcoming Exabyte era. ...
arXiv:2005.06151v1
fatcat:ag3udnjemnfihjx3cj3aok4r2q
Strategic Plan for a Scientific Software Innovation Institute (S2I2) for High Energy Physics
[article]
2018
arXiv
pre-print
During the HL-LHC era, the ATLAS and CMS experiments will record circa 10 times as much data from 100 times as many collisions as in LHC Run 1. ...
A commensurate investment in R&D for the software for acquiring, managing, processing and analyzing HL-LHC data will be critical to maximize the return-on-investment in the upgraded accelerator and detectors ...
The CWP provides a roadmap for software R&D in preparation for the HL-LHC and for other HL-LHC era HEP experiments. ...
arXiv:1712.06592v2
fatcat:gm6v2suqj5dkphccrp4bcsymau
Big data analytics for the Future Circular Collider reliability and availability studies
2017
Journal of Physics: Conference Series
The modelling is based on an in-depth study of the CERN injector chain and LHC, and is carried out as a cooperative effort with the HL-LHC project. ...
Responding to the European Strategy for Particle Physics update 2013, the Future Circular Collider study explores scenarios of circular frontier colliders for the post-LHC era. ...
Acknowledgments The authors would like to thank Dániel Stein and Anirudha Bose for their work on CERN HLoader, Zbigniew Baranowski for his expertise on persistency systems and Joeri R. ...
doi:10.1088/1742-6596/898/7/072005
fatcat:leh6y3k6tneb3fvjvkooxyzlbq
The Scalable Systems Laboratory: a Platform for Software Innovation for HEP
2020
EPJ Web of Conferences
testing of service components, and foundational systems R&D for accelerated services developed by the Institute. ...
The core team embeds and partners with other areas in the Institute, and with LHC and other HEP development and operations teams as appropriate, to define investigations and required service deployment ...
'time-to-insight' and maximize the HL-LHC physics potential (AS); and (3) development of data organization, management and access (DOMA) systems for the community's upcoming Exabyte era. ...
doi:10.1051/epjconf/202024505019
fatcat:3iqy6g7cfbg4np5dnj5ruka34a
The CMS Trigger upgrade for the HL-LHC
2019
Zenodo
The current conceptual system design is expected to take full advantage of advances in FPGA and link technologies over the coming years, providing a high-performance, low-latency computing platform for ...
In this presentation we will discuss the ongoing studies and prospects for the online reconstruction and selection algorithms for the high-luminosity era. ...
doi:10.5281/zenodo.3598771
fatcat:vkzt7zlnnrhfjdywp3okor4t24
HEP computing collaborations for the challenges of the next decade
[article]
2022
arXiv
pre-print
The main computing challenge of the next decade for the LHC experiments is presented by the HL-LHC program. ...
This proposal is in line with the OSG/WLCG strategy for addressing computing for HL-LHC and is aligned with European and other international strategies in computing for large scale science. ...
Figure 1: the ATLAS (top) and CMS (bottom) projections for the computing (left) and disk storage (right) needs for the HL-LHC. ...
arXiv:2203.07237v1
fatcat:dj4dfpowonbqrh7iqjmk3f2kjy
CTD2020: Fast tracking for the HL-LHC ATLAS detector
2020
Zenodo
This poses a significant challenge for the track reconstruction and its associated computing requirements due to the unprecedented number of particle hits in the tracker system. ...
During the High-Luminosity Phase 2 of LHC, up to 200 simultaneous inelastic proton-proton collisions per bunch crossing are expected. ...
For the HL-LHC era a pile-up regime between 140 and 200 is expected. ...
doi:10.5281/zenodo.4088455
fatcat:ftqhrozg5rbvnihuopvzqzoqa4
CRIC: a unified information system for WLCG and beyond
2019
EPJ Web of Conferences
The contribution describes the CRIC architecture, the implementation of the data model, collectors, user interfaces, and advanced authentication and access control components of the system. ...
Following the increasing demands of LHC computing needs toward the high-luminosity era, the experiments are engaged in an ambitious program to extend the capability of the WLCG distributed environment, for instance ...
Therefore, the computing system for the LHC was designed by integrating the resources of the computing centres of the scientific institutions participating in the LHC experiments into a single LHC computing service ...
doi:10.1051/epjconf/201921403003
fatcat:n4zhwpmdlzg5rm4f3gabezlxze
Evolution of ATLAS analysis workflows and tools for the HL-LHC era
2021
EPJ Web of Conferences
The present LHC computing model will not be able to provide the required infrastructure growth, even taking into account the expected evolution in hardware technology. ...
State-of-the-art workflow management technologies and tools to handle these methods within the existing distributed computing system are now being evaluated and developed. ...
In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid and other centres for delivering so effectively the computing infrastructure essential to ...
doi:10.1051/epjconf/202125102002
fatcat:znotpfl5szcmphva5a7eikhyne
Showing results 1 — 15 out of 290 results