Trade-offs between Communication Throughput and Parallel Time

Yishay Mansour, Noam Nisan, Uzi Vishkin
Journal of Complexity, 1999
We study the effect of limited communication throughput on parallel computation in a setting where the number of processors is much smaller than the length of the input. Our model has p processors that communicate through a shared memory of size m. The input has size n and can be read directly by all the processors. We are primarily interested in the case n >> p >> m. As a test case we study the list reversal problem, for which we prove a time lower bound of Ω(n/√(mp)). (A similar lower bound holds also for the problems of sorting, finding all unique elements, convolution, and universal hashing.) This result demonstrates that limiting the communication (i.e., small m) can have a significant effect on parallel computation. We show an almost matching upper bound of O((n/√(mp)) · log^O(1) n). The upper bound requires the development of a few interesting techniques that can alleviate the limited communication in some general settings; specifically, we show how to efficiently emulate a large shared memory on top of a limited one. The lower bound applies even to randomized machines, and the upper bound is achieved by a randomized algorithm. We also argue that some standard methodologies for designing parallel algorithms appear to require a relatively high level of communication throughput. Our results suggest that new alternative methodologies needing less communication must be invented for parallel machines that provide only a low level of communication throughput, since otherwise those machines will be severely handicapped as general-purpose parallel machines. Although we do not rule that out, we cannot offer any encouraging evidence to suggest that such new methodologies are likely to be found. © Academic Press
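The trade-off expressed by the Ω(n/√(mp)) bound can be made concrete numerically. A minimal sketch (the parameter values below are illustrative, not taken from the paper):

```python
import math

def list_reversal_time_lower_bound(n: int, p: int, m: int) -> float:
    """The abstract's Omega(n / sqrt(m * p)) time lower bound for list
    reversal with p processors and a shared memory of m cells
    (constant factors omitted)."""
    return n / math.sqrt(m * p)

# Illustrative parameters satisfying n >> p >> m, as the paper assumes.
n, p = 10**6, 1024
for m in (1, 16, 256):
    t = list_reversal_time_lower_bound(n, p, m)
    # Shrinking the shared memory m raises the time lower bound,
    # while the trivial work bound n / p is unaffected.
    print(f"m={m}: lower bound ~ {t}")
```

Note that for m <= p we have √(mp) <= p, so the bound dominates the trivial n/p bound; the smaller the shared memory, the larger the gap.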
doi:10.1006/jcom.1998.0498