Gaudi components for concurrency: Concurrency for existing and future experiments
Journal of Physics: Conference Series
Abstract. HEP experiments produce enormous data sets
at an ever-growing rate. To cope with the challenge posed by these data sets, experiments' software needs to embrace all capabilities modern CPUs offer. With a decreasing memory/core ratio, the one-process-per-core approach of recent years becomes less feasible. Instead, multi-threading with fine-grained parallelism needs to be exploited to benefit from memory sharing among threads. Gaudi is an experiment-independent data processing framework, used for instance by the ATLAS and LHCb experiments at CERN's Large Hadron Collider. It was originally designed with only sequential processing in mind. In a recent effort, the framework has been extended to allow for multi-threaded processing. This includes components for the concurrent scheduling of several algorithms (processing either the same event or multiple events), thread-safe data store access, and resource management. In the sequential case, the relationships between algorithms are encoded implicitly in their pre-determined execution order. For parallel processing, these relationships need to be expressed explicitly, so that the scheduler can exploit maximum parallelism while respecting dependencies between algorithms. Therefore, means to express and automatically track these dependencies need to be provided by the framework. In this paper, we present components introduced to express and track dependencies of algorithms in order to deduce a precedence-constrained directed acyclic graph, which serves as the basis for our algorithmically sophisticated scheduling approach for tasks with dynamic priorities. We introduce an incremental migration path for existing experiments towards parallel processing and highlight the benefits of explicit dependencies even in the sequential case, such as sanity checks and sequence optimization by graph analysis.
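The core idea of the abstract can be illustrated with a minimal, self-contained sketch. This is not Gaudi code; the `Algorithm` struct and `scheduleOrder` function are hypothetical stand-ins. Each algorithm declares the data objects it reads and writes; producer-consumer edges between algorithms are derived from these declarations, yielding a precedence-constrained directed acyclic graph that is then executed in a topological order (here via Kahn's algorithm, without the dynamic priorities mentioned in the abstract).

```cpp
#include <cassert>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

// Hypothetical stand-in for a framework algorithm: it declares which
// data objects it consumes and which it produces.
struct Algorithm {
    std::string name;
    std::vector<std::string> inputs;   // data objects read
    std::vector<std::string> outputs;  // data objects written
};

// Derive a DAG from the declared dependencies (edge: producer -> consumer
// for every shared data object) and return one valid execution order.
std::vector<std::string> scheduleOrder(const std::vector<Algorithm>& algs) {
    std::map<std::string, std::string> producer;  // data object -> producing algorithm
    for (const auto& a : algs)
        for (const auto& out : a.outputs) producer[out] = a.name;

    std::map<std::string, std::set<std::string>> succ;  // adjacency list
    std::map<std::string, int> indeg;                   // pending-input counts
    for (const auto& a : algs) indeg[a.name];           // ensure every node has an entry
    for (const auto& a : algs)
        for (const auto& in : a.inputs) {
            auto it = producer.find(in);
            if (it != producer.end() && succ[it->second].insert(a.name).second)
                ++indeg[a.name];
        }

    // Kahn's algorithm: repeatedly dispatch algorithms whose inputs are all
    // available; in a parallel scheduler, everything in `ready` could run
    // concurrently on a thread pool.
    std::queue<std::string> ready;
    for (const auto& [name, d] : indeg)
        if (d == 0) ready.push(name);

    std::vector<std::string> order;
    while (!ready.empty()) {
        std::string n = ready.front();
        ready.pop();
        order.push_back(n);
        for (const auto& m : succ[n])
            if (--indeg[m] == 0) ready.push(m);
    }
    // A circular dependency would leave some algorithms unscheduled; this is
    // exactly the kind of sanity check explicit dependencies enable.
    assert(order.size() == algs.size() && "dependency cycle: no valid schedule");
    return order;
}
```

With explicit input/output declarations like these, the same graph also supports the sequential-case benefits noted above: cycle detection is a by-product of the topological sort, and independent branches of the graph expose candidates for reordering or parallel dispatch.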