Integrating message-passing and shared-memory

David Kranz, Kirk Johnson, Anant Agarwal, John Kubiatowicz, Beng-Hong Lim
Proceedings of the Fourth ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP '93), 1993
This paper discusses some of the issues involved in implementing a shared-address-space programming model on large-scale, distributed-memory multiprocessors. While such a programming model can be implemented on both shared-memory and message-passing architectures, we argue that the transparent, coherent caching of global data provided by many shared-memory architectures is of crucial importance. Because message-passing mechanisms are much more efficient than shared-memory loads and stores for certain types of interprocessor communication and synchronization operations, however, we argue for building multiprocessors that efficiently support both shared-memory and message-passing mechanisms. We describe an architecture, Alewife, that integrates support for shared memory and message passing through a simple interface; we expect the compiler and runtime system to cooperate in using whichever hardware mechanism is most efficient for a given operation. We report on both integrated and exclusively shared-memory implementations of our runtime system and two applications. The integrated runtime system drastically cuts the cost of communication incurred by scheduling, load balancing, and certain synchronization operations. We also present preliminary performance results comparing the two systems.
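
To make the division of labor concrete, the sketch below (not from the paper, and not Alewife's actual interface) emulates the pattern in portable C with POSIX threads: bulk data moves through the shared address space, while a small mailbox guarded by a mutex and condition variable stands in for the cheap message send used for notification and synchronization. All names here (shared_buf, mailbox, producer, consumer) are illustrative assumptions.

```c
/* Hypothetical sketch: bulk data travels through shared memory, while a
 * short "message" -- emulated here with a mailbox and condition variable --
 * carries the synchronization, standing in for the inexpensive hardware
 * message send the paper argues for. Compile with -pthread. */
#include <pthread.h>
#include <stdio.h>

#define N 1024

static double shared_buf[N];            /* globally addressable data        */

struct mailbox {                        /* stand-in for a hardware message  */
    pthread_mutex_t lock;
    pthread_cond_t  ready;
    int             has_msg;
    int             payload;            /* e.g., number of valid elements   */
};

static struct mailbox mbox = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
};

static void *producer(void *arg)
{
    (void)arg;
    for (int i = 0; i < N; i++)         /* bulk update via shared memory    */
        shared_buf[i] = (double)i;

    pthread_mutex_lock(&mbox.lock);     /* short notification "message"     */
    mbox.payload = N;
    mbox.has_msg = 1;
    pthread_cond_signal(&mbox.ready);
    pthread_mutex_unlock(&mbox.lock);
    return NULL;
}

static void *consumer(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mbox.lock);
    while (!mbox.has_msg)               /* block until notified instead of  */
        pthread_cond_wait(&mbox.ready,  /* spinning on a shared flag        */
                          &mbox.lock);
    int n = mbox.payload;
    pthread_mutex_unlock(&mbox.lock);

    double sum = 0.0;
    for (int i = 0; i < n; i++)         /* read data through the (coherently */
        sum += shared_buf[i];           /* cached) shared address space      */
    printf("consumer saw %d elements, sum = %.1f\n", n, sum);
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```

In the integrated system described by the abstract, the runtime would pick the mechanism per operation: loads and stores (with coherent caching) for the data, and an explicit message for the notification, rather than forcing every interaction through one mechanism.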
doi:10.1145/155332.155338 dblp:conf/ppopp/KranzJAKL93 fatcat:6tjf3vkdyzgdxdkri4k6pfuw74