Experience of the WLCG data management system from the first two years of the LHC data taking

Dagmar Adamova
2012 Proceedings of the 50th International Winter Meeting on Nuclear Physics — PoS(Bormio2012)
High energy physics is one of the research areas where the accomplishment of scientific results is inconceivable without distributed computing. The experiments at the Large Hadron Collider (LHC) at CERN face the challenge of recording, processing and giving access to tens of petabytes (PB) of data produced in the proton-proton and heavy-ion collisions at the LHC (15 PB/year of raw data alone). To accomplish this task and to enable early delivery of physics results, the LHC experiments use the Worldwide LHC Computing Grid (WLCG), an infrastructure of distributed computational and storage facilities provided by more than 140 centers. Since the first centers joined WLCG in 2002, the infrastructure has been gradually built up, upgraded and stress-tested. As a result, when the LHC started delivering beams in late 2009, the WLCG was fully ready and capable of storing and processing the data and allowing physicists to analyze it. The architecture of WLCG follows a hierarchical system of sites classified according to a "Tier" taxonomy: there is one Tier-0 (CERN), 11 Tier-1s and about 130 Tier-2s spread over five continents. We briefly summarize the experience and performance of the WLCG distributed data management system during the first two years of data taking. We demonstrate the irreplaceable role of the WLCG Tier-2 sites in providing compute and storage resources and operation services. As an example, we present the contribution of the WLCG Tier-2 site in Prague, Czech Republic to the overall WLCG Tier-2 operations.
doi:10.22323/1.160.0014