Using relaxed concurrent data structures for contention minimization in multithreaded MPI programs

Andrey V Tabakov, Alexey A Paznikov
2019 Journal of Physics: Conference Series
Parallel computing is one of the top priorities in computer science. The main means of parallel information processing is the distributed computer system (CS): a composition of elementary machines that interact through a communication medium. Modern distributed CSs implement thread-level parallelism (TLP) within a single computing node (a multi-core CS with shared memory) as well as process-level parallelism (PLP) across the entire distributed CS. The main tool for developing parallel programs for such systems is the MPI standard. The need to create scalable parallel programs that effectively use compute nodes with shared memory has driven the development of the MPI standard, which today supports the creation of hybrid multi-threaded MPI programs. A hybrid multi-threaded MPI program combines the computational capabilities of processes and threads. The standard defines four levels of multithreading support: Single, a single thread of execution; Funneled, a multi-threaded program in which only the main thread can perform MPI operations; Serialized, in which only one thread at a time may call MPI functions; and Multiple, in which each thread can call MPI functions at any time. The main challenge of the Multiple mode is the need to synchronize the communicating threads within each process. This paper presents an overview of work addressing the problem of synchronizing processes running on remote machines and of synchronizing threads within a program. A method for thread synchronization based on queues with relaxed operation semantics is proposed.
doi:10.1088/1742-6596/1399/3/033037