
The ALICE detector control system

P. Chochula, L. Jirden, A. Augustinus, G. de Cataldo, C. Torcato, P. Rosinsky, L. Wallet, M. Boccioli, L. Cardoso
2009 2009 16th IEEE-NPSS Real Time Conference  
ALICE is one of the six currently installed experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland).  ...  The experiment saw its first particles during the commissioning of the LHC accelerator in 2008 and is now preparing for the first physics runs foreseen for the autumn of 2009.  ...  The core of the framework [3] is built as a common effort between the LHC experiments and the CERN EN/ICE-SCD section, in the context of the Joint COntrols Project (JCOP) [4].  ...
doi:10.1109/rtc.2009.5322159 fatcat:55o4fkj5qrchfbwv5uwiommuny

The ALICE Detector Control System

P. Chochula, L. Jirden, A. Augustinus, G. de Cataldo, C. Torcato, P. Rosinsky, L. Wallet, M. Boccioli, L. Cardoso
2010 IEEE Transactions on Nuclear Science  
ALICE is one of the six currently installed experiments at the Large Hadron Collider (LHC) at CERN (Geneva, Switzerland).  ...  The experiment saw its first particles during the commissioning of the LHC accelerator in 2008 and is now preparing for the first physics runs foreseen for the autumn of 2009.  ...  The core of the framework [3] is built as a common effort between the LHC experiments and the CERN EN/ICE-SCD section, in the context of the Joint COntrols Project (JCOP) [4].  ...
doi:10.1109/tns.2009.2039944 fatcat:wmq7levxvbbmhhkequlzk5dktq

Data Handling and Communication [chapter]

Frédéric Hemmer, Pier Giorgio Innocenti
2017 Advanced Series on Directions in High Energy Physics  
This chapter traces the evolution of computer usage at CERN, with an emphasis on the impact on experimentation, from the SC to the LHC [1].  ...  Processing of bubble chamber pictures: The arrival of the IBM 709 in 1961 marked progress in the processing of bubble chamber and spark chamber pictures.  ...  By the end of the 1980s and the start-up of the LEP collider, CERN had acquired in-depth experience in storage management, networking and distributed processing.  ...
doi:10.1142/9789814749145_0009 fatcat:tz5pptx23feoxoiz6zbjbxm5si

Control of large helium cryogenic systems: a case study on CERN LHC

Marco Pezzetti
2021 EPJ Techniques and Instrumentation  
In that context, the standardization of technical solutions at both the hardware and software level also simplifies systems monitoring and the operation and maintenance processes, while providing a high level  ...  Since then, CERN has continued developing the hardware and software components of the cryogenic control system, building on the experience gained.  ...  Acknowledgements The CERN LHC cryogenic control system is the result of team work and long-run experience (more than 20 years!)  ...
doi:10.1140/epjti/s40485-021-00063-w fatcat:ca3avkapgvc7xnoym3iznw773y

ATLAS Tier-3 within IFIC-Valencia analysis facility

M Villaplana, S González de la Hoz, A Fernández, J Salt, A Lamas, F Fassi, M Kaci, E Oliver, J Sánchez, V Sánchez-Martinez
2012 Journal of Physics: Conference Series
ATLAS users, who make up 70% of IFIC users, also have the possibility of analysing data with a PROOF farm and storing them locally.  ...  In this contribution we discuss the design of the analysis facility as well as the monitoring tools we use to control and improve its performance.  ...  Acknowledgements We acknowledge the support of MICINN, Spain (Plan Nacional de Física de Partículas FPA2010-21919-C03-01)  ...
doi:10.1088/1742-6596/396/4/042062 fatcat:btmujyq3l5fihf5lkdxklms5cq

Moving Populations Event Recognition Under Re-Identification and Data Locality Constraints

Wolfgang Maaß, Tom Michels
2015 International Conference on Wirtschaftsinformatik  
obeying non-re-identification and data decentrality requirements.  ...  For more than a decade, tracking and tracing physical objects has been a target of information systems within the realm of research on the Internet of Things.  ...  For instance, the ATLAS experiment for the Large Hadron Collider (LHC) at CERN generates 300,000 MByte/s by incident detectors, which exceeds storage capabilities.  ...
dblp:conf/wirtschaftsinformatik/MaassM15 fatcat:7jccxvmicvbepj5ntgz523s46a
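A quick back-of-the-envelope check makes the quoted rate concrete. The only input below is the snippet's own figure of 300,000 MByte/s; the rest is plain unit conversion, sketched in Python:

    # Unit conversion of the detector rate quoted in the abstract snippet above
    # (300,000 MByte/s). No other figures are assumed.
    rate_mb_per_s = 300_000                       # quoted raw rate in MByte/s
    rate_gb_per_s = rate_mb_per_s / 1_000         # 300 GByte/s

    per_hour_pb = rate_gb_per_s * 3_600 / 1_000_000   # GByte -> PByte
    per_day_pb = per_hour_pb * 24

    print(f"{rate_gb_per_s:.0f} GB/s -> {per_hour_pb:.2f} PB/hour -> {per_day_pb:.1f} PB/day")
    # 300 GB/s -> 1.08 PB/hour -> 25.9 PB/day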

Petabyte-scale data migration at CERNBox

Hugo Gonzalez Labrador, Jose Ramon Mendez Reboredo
2019 Zenodo  
This thesis focuses on the design and analysis of the currently running system to support a major data migration that will happen in the coming months.  ...  The FDO section operates and supports the storage and file system services for physics. I joined the FDO section as a Technical Student in 2014 and am currently a staff member of the section.  ...  The LHC experiments produce over 30 petabytes of data per year. Archiving vast quantities of data is an essential function at CERN. Magnetic tapes are used as the main long-term storage medium.  ...
doi:10.5281/zenodo.3402900 fatcat:i2ywssmct5fpjej4e2uuik5s2e
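For a sense of scale of such a migration, the sketch below estimates copy times in Python; the 1 PB volume and the sustained throughput values are illustrative assumptions, not figures taken from the thesis:

    # Rough copy-time estimate for a petabyte-scale migration.
    # Both the volume (1 PB) and the sustained throughputs tried below are
    # illustrative assumptions, not numbers from the document.
    def migration_days(volume_pb: float, throughput_gb_per_s: float) -> float:
        """Days needed to copy volume_pb petabytes at a sustained rate."""
        seconds = volume_pb * 1_000_000 / throughput_gb_per_s   # PB -> GB
        return seconds / 86_400

    for rate in (1, 5, 10):   # assumed sustained GB/s of the migration pipeline
        print(f"1 PB at {rate:>2} GB/s ~ {migration_days(1, rate):5.1f} days")
    # 1 PB at  1 GB/s ~  11.6 days
    # 1 PB at  5 GB/s ~   2.3 days
    # 1 PB at 10 GB/s ~   1.2 days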

Distributed Analysis in CMS

Alessandra Fanfani, Anzar Afaq, Jose Afonso Sanches, Julia Andreeva, Giuseppe Bagliesi, Lothar Bauerdick, Stefano Belforte, Patricia Bittencourt Sampaio, Ken Bloom, Barry Blumenfeld, Daniele Bonacorsi, Chris Brew (+67 others)
2010 Journal of Grid Computing  
The CMS experiment expects to manage several Pbytes of data each year during the LHC programme, distributing them over many  ...  Summaries of the experience in establishing efficient and scalable operations to get prepared for CMS distributed analysis are presented, followed by the user experience in their current analysis activities  ...  Acknowledgements We thank the technical and administrative staff of CERN and CMS Institutes, Tier-1 and Tier-2 centres and acknowledge their support.  ...
doi:10.1007/s10723-010-9152-1 fatcat:jvs2ccg7r5em7b3vringhmobye

LEP to an FCC-ee – A Bridge Too Far?

Jamie Shiers
2020 Zenodo  
This paper attempts to précis key documents, presentations and papers that are still available concerning the preparation for, and execution of, (mainly offline) computing for the Large Electron Positron collider (LEP) at CERN.  ...  Acknowledgements The authors of the individual papers and reports summarized in this document deserve all the credit for the hard work and constant innovations that were a key characteristic of computing  ...
doi:10.5281/zenodo.4139582 fatcat:6h6ckn6nibfnzbqzjx6vmgs6r4

Evolution of the Hadoop Platform and Ecosystem for High Energy Physics

Zbigniew Baranowski, Emil Kleszcz, Prasanth Kothuri, Luca Canali, Riccardo Castellotti, Manuel Martin Marquez, Nuno Guilherme Matos de Barros, Evangelos Motesnitsalis, Piotr Mrowczynski, Jose Carlos Luna Duran, A. Forti, L. Betev (+3 others)
2019 EPJ Web of Conferences  
This paper reports on the overall status of the Hadoop platform and the related Hadoop and Spark service at CERN, detailing recent enhancements and features introduced in many areas including the service configuration  ...  Interest in using scalable data processing solutions based on the Apache Hadoop ecosystem is constantly growing in the High Energy Physics (HEP) community.  ...  The evolution of the service has profited from the collaboration with Intel in the context of the CERN openlab project and the CMS Bigdata project.  ...
doi:10.1051/epjconf/201921404058 fatcat:i6eogxxrpnhivlaepdnxcuhq2u
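As a flavour of the workloads such a service hosts, here is a minimal PySpark sketch; the input path and the column names (run_number, nbytes) are hypothetical placeholders, not actual CERN datasets or schemas:

    # Minimal PySpark example: read a columnar dataset and aggregate it.
    # The path and column names are placeholders for illustration only.
    from pyspark.sql import SparkSession, functions as F

    spark = (SparkSession.builder
             .appName("hep-aggregation-sketch")
             .getOrCreate())

    # Read a Parquet dataset from the cluster's file system.
    events = spark.read.parquet("hdfs:///example/monitoring/events.parquet")

    # Volume of data and number of records per run.
    summary = (events
               .groupBy("run_number")
               .agg(F.sum("nbytes").alias("total_bytes"),
                    F.count("*").alias("n_events"))
               .orderBy("run_number"))

    summary.show(10)
    spark.stop()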

The next-generation ARC middleware

O. Appleton, D. Cameron, J. Cernak, P. Dóbé, M. Ellert, T. Frågåt, M. Grønager, D. Johansson, J. Jönemo, J. Kleist, M. Kočan, A. Konstantinov (+22 others)
2010 Annales des télécommunications  
ARC aims at providing general purpose, flexible, collaborative computing environments suitable for a range of uses, both in science and business.  ...  The Advanced Resource Connector (ARC) is a light-weight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources.  ...  Acknowledgements This work was supported in part by the Information Society and Technologies Activity of the European Commission through the work of the KnowARC project (Contract No.: 032691).  ... 
doi:10.1007/s12243-010-0210-2 fatcat:6est3snla5bz7drbnuzziesi5i

Managing Very-Large Distributed Datasets [chapter]

Miguel Branco, Ed Zaluska, David de Roure, Pedro Salgado, Vincent Garonne, Mario Lassnig, Ricardo Rocha
2008 Lecture Notes in Computer Science  
The motivation for our work is the ATLAS Experiment for the Large Hadron Collider (LHC) at CERN, where the authors are involved in developing the data management middleware.  ...  This middleware, called DQ2, is charged with shipping petabytes of data every month to research centers and universities worldwide and has achieved aggregate throughputs in excess of 1.5 Gbytes/sec over  ...  We would like to acknowledge the many contributions to the design by Torre Wenaus and David Cameron and the help of David and Benjamin Gaidioz in implementing DQ2.  ... 
doi:10.1007/978-3-540-88871-0_54 fatcat:nbx7bqkiajgmnnvwqeeh2liuve
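The two throughput figures quoted above are easy to reconcile with a one-line estimate: 1.5 GByte/s sustained over a 30-day month corresponds to a few petabytes moved per month, as the Python snippet shows:

    # Reconciling the quoted figures: aggregate throughput above 1.5 GByte/s
    # versus "petabytes of data every month".
    rate_gb_per_s = 1.5                    # quoted aggregate throughput
    seconds_per_month = 30 * 86_400        # ~30-day month

    monthly_pb = rate_gb_per_s * seconds_per_month / 1_000_000   # GB -> PB
    print(f"{rate_gb_per_s} GB/s sustained ~ {monthly_pb:.1f} PB per month")
    # 1.5 GB/s sustained ~ 3.9 PB per month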

Týr: Blob Storage Meets Built-In Transactions

Pierre Matri, Alexandru Costan, Gabriel Antoniu, Jesus Montes, Maria S. Perez
2016 SC16: International Conference for High Performance Computing, Networking, Storage and Analysis  
Large-scale experiments on Microsoft Azure with a production application from CERN LHC show Týr throughput outperforming state-of-the-art solutions by more than 75%.  ...  Týr offers fine-grained random write access to data and in-place atomic operations.  ...  ALICE (A Large Ion Collider Experiment) [10] is one of the four LHC (Large Hadron Collider) experiments run at CERN (European Organization for Nuclear Research) [13].  ...
doi:10.1109/sc.2016.48 dblp:conf/sc/MatriCAMP16 fatcat:qx2vd33pirdo7i2rwu2r3wwvru
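The abstract's key features are fine-grained random writes and built-in transactions. The toy sketch below illustrates the concept only; it is not Týr's API (none is given here), just an in-memory stand-in whose transact() groups several writes so that concurrent readers observe either all of them or none:

    # Toy, in-memory illustration of transactional blob writes; NOT Týr's API.
    import threading
    from contextlib import contextmanager

    class ToyBlobStore:
        def __init__(self):
            self._blobs: dict[str, bytearray] = {}
            self._lock = threading.RLock()

        def read(self, key: str, offset: int, length: int) -> bytes:
            with self._lock:
                return bytes(self._blobs.get(key, bytearray())[offset:offset + length])

        def write(self, key: str, offset: int, data: bytes) -> None:
            # Fine-grained random write: patch a byte range inside the blob.
            with self._lock:
                blob = self._blobs.setdefault(key, bytearray())
                if len(blob) < offset + len(data):
                    blob.extend(b"\x00" * (offset + len(data) - len(blob)))
                blob[offset:offset + len(data)] = data

        @contextmanager
        def transact(self):
            # Hold the store lock across several writes so readers see the
            # group of updates atomically (all or nothing).
            with self._lock:
                yield self

    store = ToyBlobStore()
    with store.transact() as tx:
        tx.write("run-001/header", 0, b"v2")
        tx.write("run-001/payload", 128, b"\xde\xad\xbe\xef")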

Virtual Data in CMS Analysis [article]

A. Arbree, P. Avery, D. Bourilkov, R. Cavanaugh, J. Rodriguez, G. Graham, M. Wilde, Y. Zhao
2003 arXiv   pre-print
We describe a prototype for the analysis of data from the CMS experiment based on the virtual data system Chimera and the object-oriented data analysis framework ROOT.  ...  ratio, varying the estimation of parameters, etc.; by facilitating the audit of an analysis and the reproduction of its results by a different group, or in a peer-review context.  ...  Acknowledgments This work is supported in part by the United States National Science Foundation under grants NSF ITR-0086044 (GriPhyN) and NSF PHY-0122557 (iVDGL).  ...
arXiv:physics/0306008v2 fatcat:2i6foxls5bfdzlpmjwiyci6kiq

Book of Abstracts: Cloud Services for Synchronisation and Sharing -- CS3 Workshop -- INFN Copenhagen January 2020

Belinda Chan, Guido Aben, Martin Bech, Łukasz Dutka, Fabio Farina, Massimo Lamanna, Jakub Tomasz Mościcki, Frederik Orellana, Tilo Steiger, Stefano Stalio, Ron Trompert
2020 Zenodo  
Book of Abstracts: Cloud Services for Synchronization and Sharing 2020  ...  In this presentation we will give a summary of the present experience and future evolution of SWAN, both at CERN and in a larger context.  ...  Oracle and CERN have been collaborating for more than 15 years in the context of the CERN Openlab initiative in order to assess public cloud solutions.  ...
doi:10.5281/zenodo.3601215 fatcat:5wwyjoc4dvhr5pgqmq3qqxn5ci
Showing results 1–15 of 133