Massively parallel communication

In the study of parallel algorithms, the massively parallel communication model or MPC model is a theoretical model of computing, intended as an abstraction for parallel computing systems that use frameworks such as MapReduce, and frequently applied to algorithmic problems in graph theory.

The input to a computational problem in this model is distributed arbitrarily across all of the machines, and is assumed to have a size $n$ that is significantly larger than the memory available on any single machine, so that each machine can see only a small fraction of the input and some nontrivial level of parallelism will be required to solve the given problem.[1] With this setup, computation proceeds in a sequence of rounds. In each round, each machine performs some local computation with the information it has in its memory, and then exchanges messages with other machines.[1]
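The round structure can be illustrated with a short simulation sketch. The following Python code is an illustration only, and the names Machine, run_mpc, and local_round are invented for this sketch rather than taken from the cited sources: each machine holds a bounded slice of the input, and in every round it computes locally and emits messages that are delivered at the start of the next round.

```python
# Minimal sketch, assuming invented names (Machine, run_mpc, local_round):
# machines with bounded local memory alternate local computation with a
# synchronous message exchange, the two phases of a single MPC round.

class Machine:
    def __init__(self, machine_id, memory_limit):
        self.id = machine_id
        self.memory_limit = memory_limit  # per-machine memory bound (not enforced in this sketch)
        self.data = []                    # this machine's slice of the input
        self.inbox = []                   # messages delivered this round

def run_mpc(machines, local_round, num_rounds):
    """Run num_rounds rounds: local_round(machine) performs the local
    computation and returns a dict {destination_machine_index: message};
    all messages are delivered together at the start of the next round."""
    for _ in range(num_rounds):
        outgoing = [local_round(m) for m in machines]   # local computation only
        for m in machines:
            m.inbox = []                                # clear last round's inbox
        for messages in outgoing:                       # synchronous message exchange
            for dest, payload in messages.items():
                machines[dest].inbox.append(payload)
    return machines
```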

An initial version of this model was introduced, under the MapReduce name, in a 2010 paper by Howard Karloff, Siddharth Suri, and Sergei Vassilvitskii.[2][3] The name massively parallel communication was given to this model in a subsequent refinement by Beame, Koutris, and Suciu.[4]

Goodrich et al. provide, as an example of an algorithm in this model, a randomized algorithm for sorting $n$ values.[3] The resulting algorithm succeeds with high probability using $O(n \log n)$ total work, matching the amount of work needed for sequential sorting algorithms.
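A common concrete instance of this kind of round-efficient sorting is splitter-based sample sort, the approach popularized by TeraSort. The sketch below is a single-process Python illustration of that general idea, not a reproduction of the algorithm of Goodrich et al.; the machine count, sampling rate, and the name mpc_sample_sort are assumptions made for the example. Values are sampled to choose splitters, routed to the machine owning their key range, and sorted locally, so that concatenating the machines' outputs gives the globally sorted sequence.

```python
import bisect
import random

NUM_MACHINES = 8      # illustrative machine count (assumption)
SAMPLE_RATE = 0.05    # illustrative sampling probability (assumption)

def mpc_sample_sort(chunks):
    """chunks[i] is machine i's slice of the input.  Round 1: each machine
    samples its values and sends the samples to a coordinator.  Round 2: the
    coordinator sorts the samples and broadcasts splitters, and every value
    is routed to the machine owning its key range.  Round 3: local sorting."""
    # Round 1: local sampling (gathered directly into one list in this sketch).
    samples = sorted(v for chunk in chunks for v in chunk
                     if random.random() < SAMPLE_RATE)

    # Round 2: choose splitters that cut the sample into NUM_MACHINES ranges.
    step = max(1, len(samples) // NUM_MACHINES)
    splitters = samples[step::step][:NUM_MACHINES - 1]

    # Route each value to the machine responsible for its key range.
    buckets = [[] for _ in range(NUM_MACHINES)]
    for chunk in chunks:
        for v in chunk:
            buckets[bisect.bisect_right(splitters, v)].append(v)

    # Round 3: each machine sorts locally; concatenating the buckets in order
    # yields the globally sorted sequence.
    return [sorted(bucket) for bucket in buckets]

# Usage: scatter 10_000 random values across the machines, sort, and check.
data = [random.random() for _ in range(10_000)]
chunks = [data[i::NUM_MACHINES] for i in range(NUM_MACHINES)]
assert [v for bucket in mpc_sample_sort(chunks) for v in bucket] == sorted(data)
```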

[Figure: a butterfly network containing many replicated copies of a balanced binary tree, one of which is highlighted in red]