Concurrency is a property of a system (whether a program, computer, or network) in which there is a separate execution point or "thread of control" for each process. By contrast with parallel computing, where execution occurs at the same physical instant, concurrent computing consists of process lifetimes overlapping, but execution need not happen at the same instant.
The goal of concurrent programming is to model processes that happen concurrently in the outside world, such as multiple clients accessing a server at the same time.
For example, given two tasks, T1 and T2: T1 may be executed and finished before T2, or vice versa (serial and sequential); T1 and T2 may be executed alternately (serial and concurrent); or T1 and T2 may be executed simultaneously at the same instant of time (parallel and concurrent). The word "sequential" is used as an antonym for both "concurrent" and "parallel"; when these are explicitly distinguished, concurrent/sequential and parallel/serial are used as opposing pairs.
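As an illustration, the following Go sketch (the function name task and the step counts are illustrative assumptions, not from the source) starts T1 and T2 with overlapping lifetimes; whether their steps interleave on one core or run simultaneously on several is left to the scheduler.

```go
package main

import (
	"fmt"
	"sync"
)

// task prints a few numbered steps so any interleaving is visible in the output.
func task(name string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 1; i <= 3; i++ {
		fmt.Printf("%s step %d\n", name, i)
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go task("T1", &wg) // the lifetimes of T1 and T2 overlap (concurrent)
	go task("T2", &wg) // on a multicore machine they may also run in parallel
	wg.Wait()          // the serial, sequential alternative would call task twice in turn
}
```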
The main challenge in designing concurrent programs is concurrency control: ensuring the correct sequencing of the interactions or communications between different computational executions, and coordinating access to resources that are shared among executions.
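A minimal sketch of such coordination in Go, assuming a shared counter as the contended resource: a mutex serializes the increments so that no update is lost.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int // a resource shared among all goroutines
		wg      sync.WaitGroup
	)
	for g := 0; g < 4; g++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				mu.Lock() // enforce correct sequencing of the shared update
				counter++ // without the lock, concurrent increments can be lost
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Println(counter) // always 4000 with the lock; unpredictable without it
}
```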
Logics such as Lamport's TLA+, and mathematical models such as traces and Actor event diagrams, have also been developed to describe the behavior of concurrent systems.
A consistency model (also known as a memory model) defines rules for how operations on computer memory occur and how results are produced.
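Go's memory model is one such set of rules. In the sketch below (variable names are assumptions), the atomic flag establishes a happens-before edge, so the reader is guaranteed to observe the write to data; with a plain boolean flag the program would contain a data race and its result would be undefined.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var data int
	var ready atomic.Bool

	go func() {
		data = 42         // ordinary write to shared memory...
		ready.Store(true) // ...published by an atomic store
	}()

	for !ready.Load() {
		// spin: once Load observes true, the write to data is guaranteed visible
	}
	fmt.Println(data) // prints 42 under Go's memory model
}
```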
Explicit communication can be divided into two classes: shared memory communication, in which concurrent components communicate by altering the contents of shared memory locations, and message passing communication, in which components communicate by exchanging messages. Shared memory and message passing concurrency have different performance characteristics.
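The mutex sketch above is an instance of the shared-memory class; the following Go sketch (the channel and variable names are assumptions) illustrates the message-passing class, where components exchange values over a channel instead of writing to a common location.

```go
package main

import "fmt"

func main() {
	results := make(chan int) // the channel is the only point of contact

	go func() {
		sum := 0
		for i := 1; i <= 10; i++ {
			sum += i
		}
		results <- sum // communicate by sending a message, not by sharing a variable
	}()

	fmt.Println(<-results) // receive the message: prints 55
}
```

In this style correctness does not depend on locks, since the channel itself is the only shared state.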
Concurrent computing developed out of earlier work on railroads and telegraphy in the 19th and early 20th centuries, and some terms, such as semaphores, date to this period.
The academic study of concurrent algorithms started in the 1960s, with Dijkstra (1965) credited as the first paper in this field, identifying and solving the mutual exclusion problem.[10] Concurrency is pervasive in computing, occurring from low-level hardware on a single chip to worldwide networks.