On the Hardness of Massively Parallel Computation [article]

Kai-Min Chung, Kuan-Yi Ho, Xiaorui Sun
2020 arXiv pre-print
We investigate whether there are inherent limits to parallelization in the (randomized) massively parallel computation (MPC) model by comparing it with the (sequential) RAM model. As our main result, we show the existence of hard functions that are essentially not parallelizable in the MPC model. Based on the widely used random oracle methodology in cryptography, with a cryptographic hash function h:{0,1}^n → {0,1}^n computable in time t_h, we show that there exists a function that can be computed in time O(T · t_h) and space S by a RAM algorithm, but any MPC algorithm with local memory size s < S/c for some c > 1 requires at least Ω̃(T) rounds to compute the function, even in the average case, for a wide range of parameters n ≤ S ≤ T ≤ 2^{n^{1/4}}. Our result is almost optimal in the sense that by taking T to be much larger than t_h (e.g., sub-exponential in t_h), the round complexity of any MPC algorithm with small local memory is asymptotically the same (up to a polylogarithmic factor) as the time complexity of the RAM algorithm. Our result is obtained by adapting the so-called compression argument from the data structure lower bound and cryptography literature to the context of massively parallel computation.
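As an informal illustration only (not the paper's actual construction), the canonical example of a function that is cheap for a sequential RAM machine yet resists parallel speedup in the random oracle model is an iterated hash chain: T applications of h, each depending on the previous output. Here SHA-256 stands in for the oracle h.

```python
import hashlib


def hash_chain(x: bytes, T: int) -> bytes:
    """Iterate a hash (SHA-256 standing in for the oracle h) T times.

    A RAM algorithm computes this in roughly T hash evaluations and
    constant space, but because step i's input is step (i-1)'s output,
    there is intuitively no way to shortcut the chain in parallel.
    """
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y


# Each of the T steps consumes the previous digest, so the dependency
# chain has depth T regardless of how many machines are available.
```

In the MPC setting, each round lets machines exchange messages but only hold s bits locally; the lower bound in the abstract says that for functions of this sequential flavor, roughly one hash step per round is the best any small-local-memory MPC algorithm can do.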
arXiv:2008.06554v1