Scalable Task Parallelism for NUMA

Andi Drebes, Antoniu Pop, Karine Heydemann, Albert Cohen, Nathalie Drach
Proceedings of the 2016 International Conference on Parallel Architectures and Compilation (PACT '16)
Dynamic task-parallel programming models are popular on shared-memory systems, promising enhanced scalability, load balancing and locality. These promises, however, are undermined by non-uniform memory access (NUMA). We show that using NUMA-aware task and data placement, it is possible to preserve the uniform hardware abstraction of contemporary task-parallel programming models for both computing and memory resources while achieving high data locality. Our data placement scheme guarantees that all accesses to task output data target the local memory of the accessing core. The complementary task placement heuristic improves the locality of accesses to task input data on a best-effort basis. Our algorithms take advantage of data-flow style task parallelism, where the privatization of task data enhances scalability by eliminating false dependences and enabling fine-grained dynamic control over data placement. The algorithms are fully automatic, application-independent, performance-portable across NUMA machines, and adapt to dynamic changes. Placement decisions use information about inter-task data dependences readily available in the run-time system, and placement information from the operating system. On a 192-core system with 24 NUMA nodes, our optimizations achieve above 94% locality (fraction of local memory accesses), up to 5× better performance than NUMA-aware hierarchical work-stealing, and even 5.6× compared to static interleaved allocation. Finally, we show that state-of-the-art dynamic page migration by the operating system cannot catch up with frequent affinity changes between cores and data and thus fails to accelerate task-parallel applications.
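The two placement ideas in the abstract can be pictured with a small Linux/libnuma sketch. This is an illustration only, not the paper's runtime: the helper names and the page-counting heuristic are assumptions, and a real runtime would obtain the task's input and output buffers from its dependence information rather than from explicit arrays. Build with -lnuma.

/* Hedged sketch of NUMA-aware placement for task outputs and inputs. */
#define _GNU_SOURCE
#include <numa.h>      /* numa_alloc_onnode, numa_node_of_cpu, numa_max_node */
#include <numaif.h>    /* get_mempolicy, MPOL_F_NODE, MPOL_F_ADDR */
#include <stdlib.h>
#include <unistd.h>

/* Place a task's output buffer on the NUMA node of the worker CPU that
   will run the producing task, so every write to the output is local. */
static void *alloc_output_on_worker_node(size_t size, int worker_cpu)
{
    int node = numa_node_of_cpu(worker_cpu);
    return numa_alloc_onnode(size, node < 0 ? 0 : node);
}

/* Best-effort input locality: pick the node that backs the largest share
   of the task's input pages, asking the OS where each page resides. */
static int preferred_exec_node(void *const inputs[], const size_t sizes[],
                               int n_inputs)
{
    long page = sysconf(_SC_PAGESIZE);
    int max_node = numa_max_node();
    size_t *bytes = calloc((size_t)max_node + 1, sizeof *bytes);
    for (int i = 0; i < n_inputs; i++) {
        char *p = inputs[i];
        for (size_t off = 0; off < sizes[i]; off += (size_t)page) {
            int node = -1;
            /* MPOL_F_NODE | MPOL_F_ADDR yields the node of the page at p+off. */
            if (get_mempolicy(&node, NULL, 0, p + off,
                              MPOL_F_NODE | MPOL_F_ADDR) == 0 && node >= 0)
                bytes[node] += (size_t)page;   /* page-granular approximation */
        }
    }
    int best = 0;
    for (int n = 1; n <= max_node; n++)
        if (bytes[n] > bytes[best]) best = n;
    free(bytes);
    return best;
}

A runtime along these lines would call alloc_output_on_worker_node when the producing task is dispatched to a worker, and use preferred_exec_node to bias the placement of a consumer task toward the node holding most of its inputs; in the paper, the knowledge of which buffers are a task's inputs and outputs comes directly from the data-flow dependence information already present in the run-time system.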
doi:10.1145/2967938.2967946 dblp:conf/IEEEpact/DrebesPH0D16