Linearizability

Linearizability is a strong correctness condition that constrains what outputs are possible when an object is accessed by multiple processes concurrently.

An atomic object can be understood completely from its sequential definition: even when its operations run in parallel, they always appear to occur one after the other, so no inconsistencies can emerge.

A concurrent system consists of a collection of processes communicating through shared data structures or objects.

Linearizability is important in these concurrent systems where objects may be accessed by multiple processes at the same time and a programmer needs to be able to reason about the expected results.

This property of occurring instantaneously, or indivisibly, leads to the use of the term atomic as an alternative to the longer "linearizable".

On a uniprocessor, it suffices to temporarily inhibit interrupts, ensuring that the currently running process cannot be context switched.

Further, in the presence of an instruction pipeline, uninterruptible operations present a security risk, as they can potentially be chained in an infinite loop to create a denial of service attack, as in the Cyrix coma bug.

Compilers use these hardware features, or fall back to more complex methods, to implement atomic operations; one example is GCC's libatomic.

The 64-bit ARMv8-A architecture provides the LDXR and STXR (load-exclusive/store-exclusive) instructions for byte, half-word, word, and double-word sizes.[5]

The easiest way to achieve linearizability is to run groups of primitive operations in a critical section.

Strictly independent operations can then be carefully permitted to overlap their critical sections, provided this does not violate linearizability.

Another approach, favoured by researchers (but not yet widely used in the software industry), is to design a linearizable object using the native atomic primitives provided by the hardware.

This has the potential to maximise available parallelism and minimise synchronisation costs, but requires mathematical proofs which show that the objects behave correctly.

A common theme when designing linearizable objects is to provide an all-or-nothing interface: either an operation succeeds completely, or it fails and does nothing.

For example, to demonstrate the power and necessity of linearizability, consider a simple counter which different processes can increment.

The above example shows the need to think carefully through implementations of data structures, and how linearizability can affect the correctness of the system.

Furthermore, the specific order in which the processes run can change the results, making such an error difficult to detect, reproduce, and debug.[6]

Another approach is to turn the naive algorithm into a critical section using a lock, preventing other threads from disrupting it.

Once again, the non-atomic counter algorithm can be fixed with a lock. This strategy works as expected: the lock prevents other threads from updating the value until it is released.

However, when compared with direct use of atomic operations, it can suffer from significant overhead due to lock contention.

To improve program performance, it may therefore be worthwhile to replace simple critical sections with atomic operations for non-blocking synchronization (as we have just done for the counter with compare-and-swap and fetch-and-increment), rather than the other way around. Unfortunately, a significant improvement is not guaranteed, and lock-free algorithms can easily become too complicated to be worth the effort.

Figure caption: In grey, a linear sub-history. Processes beginning in b do not have a linearizable history, because b0 or b1 may complete in either order before b2 occurs.