Parallel programming model

Parallel programming models can be classified broadly along two axes: process interaction and problem decomposition.[2]

Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.
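The essence of the shared-memory model is that all threads operate on one common address space, with synchronization primitives guarding concurrent updates. A minimal sketch in Python (using the standard `threading` module rather than Cilk, OpenMP, or TBB, purely for illustration):

```python
import threading

# Shared-memory model: all threads read and write the same
# address space; a lock guards the shared counter.
counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:  # mutual exclusion on the shared variable
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, the unsynchronized read-modify-write on `counter` would be a data race; OpenMP expresses the same idea with constructs such as critical sections and reductions.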

In contrast, the actor model uses asynchronous message passing and has been employed in the design of languages such as D, Scala and SALSA.
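In the actor model, each actor owns private state and communicates only by sending asynchronous messages to other actors' mailboxes. The following is a minimal sketch built on a thread and a queue; the `CounterActor` class is an illustrative name, not an API from any of the languages mentioned:

```python
import queue
import threading

# Actor-model sketch: the actor owns private state and reacts to
# asynchronous messages from its mailbox; no memory is shared.
class CounterActor:
    def __init__(self):
        self.mailbox = queue.Queue()
        self.total = 0
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            self.total += msg  # only the actor's own thread touches its state

    def send(self, msg):  # asynchronous: returns immediately
        self.mailbox.put(msg)

    def join(self):
        self._thread.join()

actor = CounterActor()
for i in range(1, 6):
    actor.send(i)
actor.send("stop")
actor.join()
print(actor.total)  # 15
```

Because state is confined to one actor and messages are processed sequentially, no locks are needed, which is the model's main attraction.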

Partitioned Global Address Space (PGAS) models provide a middle ground between shared memory and message passing.

PGAS provides a global memory address space abstraction that is logically partitioned, where a portion is local to each process.

Parallel processes communicate by asynchronously performing operations (e.g. reads and writes) on the global address space, in a manner reminiscent of shared memory models.

However, by semantically partitioning the global address space into portions, each with affinity to a particular process, PGAS models allow programmers to exploit locality of reference and enable efficient implementation on distributed-memory parallel computers.
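The partitioning can be sketched conceptually as a single global index space whose contiguous chunks are each owned by one process; reads and writes address the global space, while ownership determines whether an access is local or remote. The `GlobalArray` class below is a single-process simulation for illustration, not a real PGAS API:

```python
# Conceptual PGAS sketch: one global index space, logically
# partitioned so each "process" owns a contiguous local portion.
class GlobalArray:
    def __init__(self, size, nprocs):
        self.nprocs = nprocs
        self.chunk = size // nprocs
        # each partition is local to (has affinity with) one process
        self.partitions = [[0] * self.chunk for _ in range(nprocs)]

    def owner(self, i):
        return i // self.chunk  # which process owns global index i

    def write(self, i, value):  # one-sided put into the global space
        self.partitions[self.owner(i)][i % self.chunk] = value

    def read(self, i):  # one-sided get from the global space
        return self.partitions[self.owner(i)][i % self.chunk]

ga = GlobalArray(size=8, nprocs=4)
for i in range(8):
    ga.write(i, i * i)  # cheap if owner(i) is the caller, else a remote access
print(ga.read(5), ga.owner(5))  # 25 2
```

A program that mostly touches indices it owns runs at local-memory speed, which is the locality-of-reference benefit the model is designed to expose.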

A data-parallel model focuses on performing operations on a data set, typically a regularly structured array.
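In the data-parallel model, a single operation is applied elementwise across an array and the runtime partitions the elements among workers. A small sketch using Python's standard `concurrent.futures` pool, chosen only for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel model: apply the same operation to every element
# of a regularly structured array; the pool partitions the work.
data = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(lambda x: x * x, data))
print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the same function is applied independently to each element, the computation scales by splitting the array, with no inter-task communication required.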