Collective operations are building blocks for interaction patterns that are often used in SPMD algorithms in the parallel programming context.
A realization of the collective operations is provided by the Message Passing Interface[1] (MPI).
The broadcast pattern[3] is used to distribute data from one processing unit to all processing units, which is often needed in SPMD parallel programs to dispense input or global values.
One possibility is to utilize a binomial tree structure with the requirement that p = 2^d, where p is the number of processing units. For long messages, a pipelined approach is more efficient: the message is split into k packets, which are then broadcast one after another, so that data is distributed quickly in the communication network.
Pipelined broadcast on a balanced binary tree is possible in O(α log p + βn), where α denotes the startup latency per message, β the transfer time per data element, n the message size and p the number of processing units.
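As an illustrative sketch (a simulation, not actual MPI code), the rounds of a binomial-tree broadcast can be enumerated as follows; the function name and round-based model are hypothetical, and the sketch assumes the root is unit 0 and p = 2^d:

```python
# Simulation sketch (not MPI): round-based binomial-tree broadcast
# for p = 2**d processing units rooted at unit 0. In round k, every
# unit that already holds the message forwards it to the unit whose
# id differs in bit k.

def binomial_broadcast_rounds(p, root=0):
    """Return a list of rounds, each a list of (sender, receiver) pairs."""
    assert p & (p - 1) == 0, "sketch assumes p is a power of two"
    has_msg = {root}
    rounds = []
    k = 0
    while len(has_msg) < p:
        sends = []
        for u in sorted(has_msg):
            v = u ^ (1 << k)  # partner differs in bit k
            if v < p and v not in has_msg:
                sends.append((u, v))
        has_msg.update(v for _, v in sends)
        rounds.append(sends)
        k += 1
    return rounds

rounds = binomial_broadcast_rounds(8)
# After ceil(log2 p) = 3 rounds, every unit holds the message.
```

The number of units holding the message doubles each round, which is where the log p term in the runtime comes from.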
The reduce pattern[4] is used to collect data or partial results from different processing units and to combine them into a global result by a chosen operator.
The operator must be at least associative; some algorithms additionally require a commutative operator with a neutral element.
For pipelining on binary trees, the message must be representable as a vector of smaller objects that can be reduced component-wise.
Pipelined reduce on a balanced binary tree is possible in O(α log p + βn).
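A minimal simulation of a tree-based reduce (not MPI; the function name is hypothetical) can make the round structure concrete. Combining each received value on the left keeps the original element order, so only associativity of the operator is needed:

```python
# Simulation sketch (not MPI): tree-based reduce of one value per unit
# onto unit 0. In round k, every unit whose id has bit k set sends its
# partial result to its partner and drops out. Assumes p = 2**d.

def tree_reduce(values, op):
    p = len(values)
    assert p & (p - 1) == 0, "sketch assumes p is a power of two"
    partial = list(values)
    active = set(range(p))
    k = 1
    while k < p:
        for u in list(active):
            if u & k:  # u sends to u - k and becomes inactive
                # combine left-to-right, so associativity suffices
                partial[u - k] = op(partial[u - k], partial[u])
                active.discard(u)
        k *= 2
    return partial[0]

total = tree_reduce([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b)  # 36
```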
For long messages a corresponding implementation is suitable, whereas for short messages, the latency can be reduced by using a hypercube (Hypercube (communication pattern) § All-Gather/ All-Reduce) topology, if p is a power of two.
All-reduce can also be implemented with a butterfly algorithm, which achieves optimal latency and bandwidth and the same asymptotic runtime of O(α log p + βn).
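A butterfly all-reduce can be sketched as a simulation (not MPI; the function name is hypothetical): in each round, every unit exchanges its partial result with the unit whose id differs in one bit and combines both halves, lower-id block first so that an associative but non-commutative operator still gives the correct result:

```python
# Simulation sketch (not MPI): butterfly all-reduce for p = 2**d units.
# In round k, unit u exchanges its partial result with unit u ^ 2**k;
# after log2(p) rounds every unit holds the full reduction.

def butterfly_allreduce(values, op):
    p = len(values)
    assert p & (p - 1) == 0, "sketch assumes p is a power of two"
    partial = list(values)
    k = 1
    while k < p:
        # simultaneous pairwise exchange with partner u ^ k;
        # always combine the lower-id block first (associativity suffices)
        partial = [op(partial[min(u, u ^ k)], partial[max(u, u ^ k)])
                   for u in range(p)]
        k *= 2
    return partial
```

Unlike reduce followed by broadcast, no unit ever sits idle, which is how the butterfly attains both the latency and the bandwidth bound.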
The prefix-sum or scan operation[7] is used to collect data or partial results from different processing units and to compute intermediate results by an operator, which are stored on those processing units.
The operator must be at least associative, whereas some algorithms also require a commutative operator and a neutral element.
In the case of the so-called exclusive prefix sum, processing unit i receives the combination of the elements of units 0, …, i − 1 only, i.e. its own element is excluded from its result.
For long messages, the hypercube (Hypercube (communication pattern) § Prefix sum, Prefix sum § Distributed memory: Hypercube algorithm) topology is not suitable, since all processing units are active in every step and therefore pipelining cannot be used.
Prefix-sum on a binary tree can be implemented with an upward and downward phase.
In the upward phase reduction is performed, while the downward phase is similar to broadcast, where the prefix sums are computed by sending different data to the left and right children.
Pipelined prefix sum on a balanced binary tree is possible in O(α log p + βn).
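The two-phase scheme above can be sketched as a sequential simulation of a work-efficient exclusive scan (Blelloch-style; the function name and array-based formulation are illustrative, not the article's own pseudocode). The upward phase reduces subtree sums in place; the downward phase pushes prefixes down, sending different data to the left and right children:

```python
# Sketch of the two-phase (upward/downward) exclusive prefix sum on an
# array whose length is a power of two. Upward phase: each internal node
# accumulates the reduction of its subtree. Downward phase: prefixes are
# pushed down, with different values going to left and right children.

def exclusive_scan(values, op, identity):
    n = len(values)
    assert n & (n - 1) == 0, "sketch assumes power-of-two length"
    a = list(values)
    # upward phase (reduction)
    step = 1
    while step < n:
        for i in range(2 * step - 1, n, 2 * step):
            a[i] = op(a[i - step], a[i])
        step *= 2
    # downward phase (broadcast-like, swapping left sum and prefix)
    a[n - 1] = identity
    step = n // 2
    while step >= 1:
        for i in range(2 * step - 1, n, 2 * step):
            left = a[i - step]
            a[i - step] = a[i]           # left child gets the prefix
            a[i] = op(a[i], left)        # right child gets prefix + left sum
        step //= 2
    return a

exclusive_scan([1, 2, 3, 4], lambda a, b: a + b, 0)  # [0, 1, 3, 6]
```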
A barrier does not transfer any data: it blocks each processing unit until all of them have reached it. Barrier is thus used to achieve global synchronization in distributed computing; it can be viewed as an all-reduce with an empty message and is therefore possible in O(α log p).
In gather, the message size grows as partial results move up the tree, giving a runtime of O(α log p + βpn); compare this to reduce, where the message size is a constant for operators like sum.
The scatter pattern differs from broadcast in that it does not send the same message to all processing units; instead it splits the message and delivers one part of it to each processing unit.
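The splitting step can be sketched in a few lines (a simulation by slicing, not MPI; the function name and the even-divisibility assumption are ours):

```python
# Sketch: scatter splits the root's message into p equal parts and
# delivers part i to processing unit i (simulated here with slicing).

def scatter(message, p):
    assert len(message) % p == 0, "sketch assumes even divisibility"
    chunk = len(message) // p
    return [message[i * chunk:(i + 1) * chunk] for i in range(p)]

scatter([0, 1, 2, 3, 4, 5, 6, 7], 4)  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```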
Assuming we have a fully connected network, the best possible runtime for all-to-all is in O(p(α + βn)), since every processing unit must send and receive p − 1 distinct messages.
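This bound can be met with p rounds of pairwise exchange; a simulation sketch (not MPI, with a hypothetical round schedule in which unit i sends to unit (i + r) mod p in round r):

```python
# Simulation sketch (not MPI): all-to-all on a fully connected network
# in p rounds. data[i][j] is the block unit i wants to deliver to unit j;
# the result recv[j][i] is what unit j received from unit i.

def all_to_all(data):
    p = len(data)
    recv = [[None] * p for _ in range(p)]
    for r in range(p):
        for i in range(p):
            j = (i + r) % p  # in round r, unit i sends one block to unit i + r
            recv[j][i] = data[i][j]
    return recv
```

Each unit sends exactly one block of size n per round, so p rounds cost p(α + βn), matching the bound above.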
This table[12] gives an overview of the best known asymptotic runtimes, assuming we have free choice of network topology.

Broadcast: O(α log p + βn)
Reduce: O(α log p + βn)
All-reduce: O(α log p + βn)
Prefix sum: O(α log p + βn)
Barrier: O(α log p)
Gather: O(α log p + βpn)
All-gather: O(α log p + βpn)
Scatter: O(α log p + βpn)
All-to-all: O(p(α + βn))
For each operation, the optimal algorithm can depend on the input size n.
For example, broadcast for short messages is best implemented using a binomial tree whereas for long messages a pipelined communication on a balanced binary tree is optimal.
Sanders, Peter; Mehlhorn, Kurt; Dietzfelbinger, Martin; Dementiev, Roman (2019). Sequential and Parallel Algorithms and Data Structures – The Basic Toolbox. Springer.