A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2020; you can also visit <a rel="external noopener" href="https://zenodo.org/record/3599461/files/CHEP2019_434.pdf">the original URL</a>. The file type is <code>application/pdf</code>.
The ALICE experiment at the Large Hadron Collider (LHC) at CERN will deploy a combined online-offline facility for detector readout, reconstruction, and data compression. The system is designed to allow the inspection of all collisions at rates of 50 kHz for Pb-Pb and 400 kHz for pp collisions, in order to give access to rare physics signals. The input data rate of up to 3.4 TByte/s requires that a large part of the detector reconstruction be performed online, in the synchronous stage of the system. The data processing is based on individual algorithms executed in parallel processes on multiple compute nodes. Data and workload are distributed among the nodes and processes using message-queue communication provided by the FairMQ package of the ALFA software framework. As the ALICE-specific layer, a message-passing-aware data model and annotation allow data and routing to be described efficiently. Finally, the Data Processing Layer describes the reconstruction in a data-flow-oriented approach and makes the complicated nature of a distributed system transparent to users and developers. So-called workflows are defined in a declarative language as sequences of processes, each characterized by three descriptive properties: inputs, the algorithm, and outputs. With this layered structure of the ALICE software, specific stages of the reconstruction can be developed flexibly within the domain of the specified processes, without boiler-plate adjustments and without having to take into account details of the distributed and parallel system. The Data Processing Layer framework takes care of generating the workflow with the required connections and synchronization, and interfaces to the backend that deploys the workflow on computing resources. For development it is completely transparent whether a workflow runs on a laptop or on a computer cluster. The modular software framework is the basis for splitting the data processing into manageable pieces and helps to distri [...]<span class="external-identifiers"> <a target="_blank" rel="external noopener noreferrer" href="https://doi.org/10.5281/zenodo.3599461">doi:10.5281/zenodo.3599461</a> <a target="_blank" rel="external noopener" href="https://fatcat.wiki/release/bantuq6tdvat7gt7rwtpyprczu">fatcat:bantuq6tdvat7gt7rwtpyprczu</a> </span>