Automatic Methods for Hiding Latency in Parallel and Distributed Computation

Matthew Andrews, Tom Leighton, P. Takis Metaxas, Lisa Zhang
1999 · SIAM Journal on Computing (Print)
In this paper we describe methods for mitigating the degradation in performance caused by high latencies in parallel and distributed networks. For example, given any "dataflow" type of algorithm that runs in $T$ steps on an $n$-node ring with unit link delays, we show how to run the algorithm in $O(T)$ steps on any $n$-node bounded-degree connected network with average link delay $O(1)$. This is a significant improvement over prior approaches to latency hiding, which require slowdowns proportional to the maximum link delay. In the case when the network has average link delay $d_{\mathrm{ave}}$, our simulation runs in $O(\sqrt{d_{\mathrm{ave}}}\,T)$ steps using $n/\sqrt{d_{\mathrm{ave}}}$ processors, thereby preserving efficiency. We also show how to efficiently simulate an $n \times n$ array with unit link delays using slowdown $\tilde{O}(d_{\mathrm{ave}}^{2/3})$ on a two-dimensional array with average link delay $d_{\mathrm{ave}}$. Last, we present results for the case in which large local databases are involved in the computation.
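To see why the $O(\sqrt{d_{\mathrm{ave}}})$ slowdown preserves efficiency, it helps to compare the processor-time products (total work) of the original computation and the simulation; the calculation below follows directly from the bounds stated in the abstract.

\[
W_{\mathrm{orig}} = n \cdot T = O(nT),
\qquad
W_{\mathrm{sim}} = \frac{n}{\sqrt{d_{\mathrm{ave}}}} \cdot O\!\left(\sqrt{d_{\mathrm{ave}}}\,T\right) = O(nT).
\]

The $\sqrt{d_{\mathrm{ave}}}$-fold slowdown is exactly offset by the $\sqrt{d_{\mathrm{ave}}}$-fold reduction in the number of processors, so the simulation performs asymptotically the same total work as the original unit-delay ring.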
doi:10.1137/s0097539797326502