14/07/2020

On the hardness of massively parallel computation

Kai-Min Chung, Kuan-Yi Ho, Xiaorui Sun

Keywords: lower bound of communication rounds, massively parallel computation, random oracle model

Abstract: We investigate whether there are inherent limits of parallelization in the (randomized) massively parallel computation (MPC) model by comparing it with the (sequential) RAM model. As our main result, we show the existence of hard functions that are essentially not parallelizable in the MPC model. Based on the widely-used random oracle methodology in cryptography with a cryptographic hash function h: {0,1}^n → {0,1}^n computable in time t_h, we show that there exists a function that can be computed in time O(T · t_h) and space S by a RAM algorithm, but any MPC algorithm with local memory size s < S/c for some c > 1 requires at least Ω(T) rounds to compute the function, even in the average case, for a wide range of parameters n ≤ S ≤ T ≤ 2^(n^(1/4)). Our result is almost optimal in the sense that by taking T to be much larger than t_h, e.g., T to be sub-exponential in t_h, to compute the function, the round complexity of any MPC algorithm with small local memory size is asymptotically the same (up to a polylogarithmic factor) as the time complexity of the RAM algorithm. Our result is obtained by adapting the so-called compression argument from the data structure lower bounds and cryptography literature to the context of massively parallel computation.
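To illustrate the intuition behind the abstract, a natural candidate for a function that is cheap sequentially but hard to parallelize is an iterated hash chain: each step depends on the previous output, so a RAM machine computes it in time O(T · t_h), while a parallel algorithm that cannot invert or shortcut the hash appears forced to spend roughly T rounds. This is only a sketch of that intuition using SHA-256 as a stand-in for the random oracle h; the paper's actual hard function and its proof via the compression argument are more involved.

```python
import hashlib

def hash_chain(x: bytes, T: int) -> bytes:
    """Iterate a hash function T times: h(h(...h(x)...)).

    Each iteration depends on the previous digest, so the
    computation is inherently sequential if h behaves like
    a random oracle. SHA-256 here is a stand-in for the
    hash function h from the abstract, not the paper's
    exact construction.
    """
    y = x
    for _ in range(T):
        y = hashlib.sha256(y).digest()
    return y

# A RAM machine evaluates this in T sequential hash calls.
out = hash_chain(b"seed", 1000)
```

The chain also shows why the space parameter matters: storing intermediate digests lets a machine with large memory checkpoint the chain, which is exactly the kind of shortcut the paper's lower bound rules out for MPC machines with local memory s < S/c.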

Talk and the respective paper are published at SPAA 2020 virtual conference.
