CMS use of allocation-based HPC resources
2017
Journal of Physics: Conference Series
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) in Run 3 and beyond, it becomes clear that the current model of CMS computing alone will not scale accordingly. High Performance Computing (HPC) facilities, widely used in scientific computing outside of HEP, present a (at least so far) largely untapped computing resource for CMS. Even being able to use only a small fraction of HPC facilities' computing resources could significantly increase the overall computing available to CMS. Here we describe the CMS strategy for integrating HPC resources into CMS computing, the unique challenges these facilities present, and how we plan on overcoming these challenges. We also present the current status of ongoing CMS efforts at HPC sites such as NERSC (Cori cluster), SDSC (Comet cluster) and TACC (Stampede cluster).
doi:10.1088/1742-6596/898/9/092050
fatcat:53utbgx5fvdangnfl7jpinicva
CMS Tier-0: Preparing for the future
2012
Journal of Physics: Conference Series
The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It is responsible for the first processing steps of data from the CMS Experiment at CERN. This presentation covers the complete overhaul (rewrite) of the system for the 2012 run, to bring it into line with the new CMS Workload Management system, improving scalability and maintainability for the next few years.
doi:10.1088/1742-6596/396/2/022025
fatcat:zozeqhpzsnhknlzh6qiruwpu4i
The architecture and operation of the CMS Tier-0
2011
Journal of Physics: Conference Series
The Tier-0 processing system is the initial stage of the multi-tiered computing system of CMS. It takes care of the first processing steps of data at the LHC at CERN. The automated workflows running in the Tier-0 contain both low-latency processing chains for time-critical applications and bulk chains to archive the recorded data offsite the host laboratory. It is a mix between an online and an offline system, because the data the CMS DAQ initially writes out is of a temporary nature. Most of the complexity in the design of this system comes from this unique combination of online and offline use cases and dependencies. In this talk, we present the software design of the CMS Tier-0 system and an analysis of the 24/7 operation of the system in the 2009/2010 data-taking periods. The data streams handled include:
• Physics stream, about 200 Hz
• Express stream, about 20 Hz, a subset of Physics for fast monitoring/analysis and feedback
• Various calibration/monitoring streams, about 20 Hz total
doi:10.1088/1742-6596/331/3/032017
fatcat:4o5c7wvhv5cwhd7x7rgmcuaouq
The CMS Tier-0 goes Cloud and Grid for LHC Run 2
2015
Journal of Physics: Conference Series
In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put tremendous stress on our computing systems. Prompt processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone, due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution covers the evolution of the Tier-0 infrastructure and presents scale-testing results and experiences from the first data taking in 2015.
doi:10.1088/1742-6596/664/3/032014
fatcat:ro3njqysq5d5xjq5zmwapno4hu
HPC resource integration into CMS Computing via HEPCloud
2019
EPJ Web of Conferences
The higher energy and luminosity from the LHC in Run 2 have put increased pressure on CMS computing resources. Extrapolating to even higher luminosities (and thus higher event complexities and trigger rates) beyond Run 3, it becomes clear that simply scaling up the current model of CMS computing will become economically unfeasible. High Performance Computing (HPC) facilities, widely used in scientific computing outside of HEP, have the potential to help fill the gap. Here we describe the U.S. CMS efforts to integrate US HPC resources into CMS computing via the HEPCloud project at Fermilab. We present advancements in our ability to use NERSC resources at scale, and efforts to integrate other HPC sites as well. We present experience in the elastic use of HPC resources, quickly scaling up use when required by CMS workflows. We also present performance studies of the CMS multi-threaded framework on both Haswell and KNL HPC resources.
doi:10.1051/epjconf/201921403031
fatcat:4bvpm72zebaxzo2kzpq44lilqe
Opportunistic Resource Usage in CMS
2014
Journal of Physics: Conference Series
CMS uses a tiered setup of dedicated computing resources provided by sites distributed over the world and organized in WLCG. These sites pledge resources to CMS and prepare them especially to run the experiment's applications. But there are more resources available opportunistically, both on the Grid and in local university and research clusters, which can be used for CMS applications. We will present CMS's strategy to use opportunistic resources and prepare them dynamically to run CMS applications. CMS is able to run its applications on resources that can be reached through the Grid or through EC2-compliant cloud interfaces; even resources that can only be used through ssh login nodes can be harnessed. All of these usage modes are integrated transparently into the GlideinWMS submission infrastructure, which is the basis of CMS's opportunistic resource usage strategy. Technologies such as Parrot, to mount the software distribution via CVMFS, and xrootd, for access to data and simulation samples via the WAN, are used and will be described. We will summarize the experience with opportunistic resource usage and give an outlook for the restart of LHC data taking in 2015.
doi:10.1088/1742-6596/513/6/062028
fatcat:hr23b2vmvvc4zfrlyijxdrpqhu
Practice and consensus-based strategies in diagnosing and managing systemic juvenile idiopathic arthritis in Germany
2018
Pediatric Rheumatology Online Journal
Hufnagel has received research support from Novartis and Roche. Dr. ...
doi:10.1186/s12969-018-0224-2
pmid:29357887
pmcid:PMC5778670
fatcat:ik3q7lkhgje7bn3asfc4h3y7de
S100 Levels Guided Treatment Options in a Teenage Boy with Relapsing Pericarditis or Atypical Still's Disease
2012
Annals of the Pediatric Rheumatology (APR)
Unraveling the etiology of recurrent pericarditis in children is often challenging. Here, we describe the case of a previously healthy, 15-year-old boy who presented with fever, cough and thoracic pain due to pericarditis with a large pericardial effusion. After insertion of a pericardial drainage, over 1600 ml of pericardial fluid were collected over a 48-hour period. Laboratory investigations revealed systemic inflammation and negative autoantibodies, but displayed no evidence of an infectious disease. Autoimmune pericarditis was therefore suspected and a combination of glucocorticosteroid and NSAID therapy was initiated. Steroids were gradually tapered. However, the patient suffered a recurrence of pericarditis after six weeks of low-dose steroids and NSAIDs. As with the first episode, arthritis, rash, lymphadenopathy and hepatosplenomegaly were absent. Retrospective analysis of the serum from the initial episode showed significant elevation of S100 protein levels (S100A8/S100A9). As a result, we assumed an atypical presentation of Still's disease (systemic-onset juvenile idiopathic arthritis, SoJIA) and initiated therapy with anakinra, an interleukin-1 receptor antagonist. This treatment led to rapid improvement, and pericardiocentesis was avoided. Four months after the relapse and subsequent initiation of anakinra therapy, no disease recurrence has transpired, and treatment with ibuprofen and steroids could be suspended. The patient developed mild arthritic signs during follow-up, supporting the diagnosis of SoJIA. A potential differential diagnosis in this case is idiopathic recurrent acute pericarditis (IRAP), a disease of unknown etiology that shows features consistent with an autoinflammatory pathogenesis and for which recent reports have also indicated a good clinical response to anakinra. This case highlights the predictive value of S100 protein levels for the diagnosis of autoinflammatory disorders such as SoJIA, particularly when there is an atypical or oligosymptomatic presentation. Moreover, it illustrates the therapeutic potential of anakinra for autoimmune-mediated pericarditis.
doi:10.5455/apr.112620120045
fatcat:twbsmcflk5dgvibej2noiulmaa
The Diverse use of Clouds by CMS
2015
Journal of Physics: Conference Series
The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC, the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows, including Tier-0, production and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier-1 sites when it is not running as part of the trigger system. During these periods, a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. Finally, CMS is starting to utilise cloud resources offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. We present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference.
doi:10.1088/1742-6596/664/2/022012
fatcat:z6hhawosqfg63hdc2ruo6odm7m
CMS Workflow Execution Using Intelligent Job Scheduling and Data Access Strategies
2011
IEEE Transactions on Nuclear Science
Complex scientific workflows can process large amounts of data using thousands of tasks. The turnaround times of these workflows are often affected by various latencies, such as the resource discovery, scheduling and data access latencies for the individual workflow processes or actors. Minimizing these latencies will improve the overall execution time of a workflow and thus lead to a more efficient and robust processing environment. In this paper, we propose a pilot-job-based infrastructure that has intelligent data reuse and job execution strategies to minimize the scheduling, queuing, execution and data access latencies. The results have shown that significant improvements in the overall turnaround time of a workflow can be achieved with this approach. The proposed approach has been evaluated, first using the CMS Tier-0 data processing workflow, and then by simulating the workflows to evaluate its effectiveness in a controlled environment.
doi:10.1109/tns.2011.2146276
fatcat:sax2bk5jabh77f2bbug73paw34
HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation
2017
Computing and Software for Big Science
Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.
doi:10.1007/s41781-017-0001-9
fatcat:o3ca6mtu5bbdbkck7xy7u6enp4
Titelseiten
2020
Beiträge zur Geschichte der Deutschen Sprache und Literatur
AA 91 und Cpg 339 (113) - Jan-Dirk Müller
Christian Buhr: Zweifel an der Liebe. ...
Katharina Böhnert: Sprachgeschichte (107) - Christoph Fasbender
Nadine Hufnagel: Verwandtschaft im Reinhart Fuchs. ...
doi:10.1515/bgsl-2020-frontmatter1
fatcat:fb2myciqxvalffzwinni7ljmeq
SAT0502 LONG-TERM OBSERVATIONAL SAFETY SURVEILLANCE OF GOLIMUMAB TREATMENT FOR POLYARTICULAR JUVENILE IDIOPATHIC ARTHRITIS—AN INTERIM ANALYSIS
2020
Annals of the Rheumatic Diseases
: None declared, Dirk Foell Grant/research support from: Novartis, Sobi, Pfizer, Speakers bureau: Novartis, Sobi, Normi Brueck: None declared, Prasad Oommen Consultant of: Novartis, Frank Dressler: None ...
Hofmann: None declared, Hans Koessel: None declared, Ivan Foeldvari Consultant of: Novartis, Sonja Mrusek: None declared, Daniel Windschall Speakers bureau: Abbvie, Nils Onken: None declared, Markus Hufnagel: None declared, Dirk Foell Grant/research support ...
doi:10.1136/annrheumdis-2020-eular.3589
fatcat:xe5l7lrgojab3jrwizhhd5hqq4
Inhaltsverzeichnis
2002
Nietzscheforschung
Zarathustra - Sisyphos. Zur Nietzsche-Rezeption Albert Camus' (247) - Hans-Gerd von Seggern (Berlin)
Allen Tinten-Fischen feind. Metaphern der Melancholie in Nietzsches Also sprach Zarathustra (263) - Dirk ...
Carl Ludwig Nietzsche und Friedrich Nietzsche (131) - Johann Figi (Wien)
"Dionysos und der Gekreuzigte". Nietzsches Identifikation und Konfrontation mit zentralen religiösen ‚Figuren' (147) - Erwin Hufnagel ...
doi:10.1524/nifo.2002.9.jg.5
fatcat:ggwtcuvl4rdvjgyccoomm7v544
Did you know?
1995
Nature Medicine
Valiathan had worked with Charles Hufnagel, at Georgetown University Medical Center in Washington, DC. Hufnagel (now deceased) is the person who in 1951 developed the first mechanical valve. ...
This year's prizewinners are Dirk Bootsma and Jan H. J. Hoeijmakers, professors at the Erasmus University of Rotterdam, The Netherlands, noted for their studies of the DNA repair system; Peter N. ...
doi:10.1038/nm0295-109
fatcat:4okcxn7ye5b3zbsaffr4cpk4pe
Showing results 1–15 of 180