Consistency models define rules for the apparent order and visibility of updates, and are on a continuum with tradeoffs.
Generally, as long as control dependencies between instructions are preserved and writes to the same location are kept in order, the compiler can reorder the remaining instructions as required.
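As a rough illustration of this rule, the following sketch (a hypothetical helper, not a real compiler pass) treats a program as a list of memory accesses and accepts a candidate reordering only if every location's accesses keep their original relative order:

```python
# Sketch (hypothetical helper, not a real compiler pass): a reordering of
# instructions is legal if it preserves the relative order of accesses to
# the same memory location.

def same_location_order_preserved(original, reordered):
    """Check that, for every location, the reordered program performs
    that location's accesses in the original relative order."""
    locations = {loc for _, loc in original}
    for loc in locations:
        orig_seq = [op for op in original if op[1] == loc]
        new_seq = [op for op in reordered if op[1] == loc]
        if orig_seq != new_seq:
            return False
    return True

program = [("store", "x"), ("store", "y"), ("load", "x")]

# Swapping the two independent stores (different locations) is allowed...
legal = [("store", "y"), ("store", "x"), ("load", "x")]
# ...but moving the load of x above the store to x is not.
illegal = [("load", "x"), ("store", "x"), ("store", "y")]

print(same_location_order_preserved(program, legal))    # True
print(same_location_order_preserved(program, illegal))  # False
```

A real compiler also has to respect control and data dependencies between registers, which this location-only check deliberately omits.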
In the following diagram, P means "process" and the global clock's value is represented in the Sequence column.
Its practical relevance is restricted to a thought experiment and formalism, because instantaneous message exchange is impossible.
"[3][4] Adve and Gharachorloo, 1996[5] define two requirements to implement the sequential consistency; program order and write atomicity.
These slow paths can result in violations of sequential consistency, because some memories receive the broadcast data faster than others.
Verifying sequential consistency through model checking is undecidable in general, even for finite-state cache coherence protocols.
Thus, under PC a processor can execute a younger load even while an older store is still stalled.
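Extending the interleaving enumeration above makes this relaxation concrete. If each process may hoist its load above its buffered store (the store-buffer effect that PC, like TSO, permits), the outcome r1 == 0 and r2 == 0 becomes reachable even though sequential consistency forbids it:

```python
from itertools import combinations

# Compare SC against a PC/TSO-style execution where each process's
# younger load is issued before its older (buffered) store.

def outcomes(p1, p2):
    results = set()
    for slots in combinations(range(4), 2):  # which slots belong to p1
        mem, regs = {"x": 0, "y": 0}, {}
        i1, i2 = iter(p1), iter(p2)
        for pos in range(4):
            op = next(i1) if pos in slots else next(i2)
            if op[0] == "store":
                mem[op[1]] = 1
            else:
                regs[op[2]] = mem[op[1]]
        results.add((regs["r1"], regs["r2"]))
    return results

sc = outcomes([("store", "x"), ("load", "y", "r1")],
              [("store", "y"), ("load", "x", "r2")])
# Load hoisted above the store on both processors (store-buffer effect):
pc = outcomes([("load", "y", "r1"), ("store", "x")],
              [("load", "x", "r2"), ("store", "y")])

print((0, 0) in sc)  # False: impossible under SC
print((0, 0) in pc)  # True: allowed once loads bypass stores
```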
The Stanford DASH multiprocessor system implements a variation of processor consistency which is incomparable (neither weaker nor stronger) to Goodman's definitions.
Due to its informal definition, there are in fact at least two subtly different implementations,[12] one by Ahamad et al. and one by Mosberger.
Cache consistency[11][14] requires that all write operations to the same memory location are performed in some sequential order.
This exploits the fact that programs written to be executed on a multiprocessor system contain the required synchronization to ensure that data races do not occur and that SC outcomes are always produced.
This model ensures that write atomicity is always maintained; therefore, no additional safety net is required for weak ordering.
For weakly ordered models, the programmer must either use atomic locking instructions such as test-and-set, fetch-and-op, store-conditional, and load-linked, or must label synchronization variables or use fences.
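A test-and-set lock is the simplest of these primitives. The sketch below emulates it in Python; real implementations use a single atomic hardware instruction, and here a small guarded critical region stands in for that atomicity (an assumption of this illustration, not real test-and-set):

```python
import threading

# Sketch of a test-and-set spinlock. The tiny critical region inside
# _tas() emulates the atomicity of the hardware instruction.

class TASLock:
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()  # emulates instruction atomicity

    def _tas(self):
        with self._guard:
            old = self._flag
            self._flag = True
            return old  # atomically: return old value, set flag to True

    def acquire(self):
        while self._tas():  # spin until the old value was False
            pass

    def release(self):
        self._flag = False

lock = TASLock()
counter = 0

def worker():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1        # protected increment
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)  # 4000: no increments were lost
```

On real hardware the lock operations would additionally carry the fence semantics that weak ordering requires; Python's interpreter hides that detail.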
However, under the release consistency model, during entry to a critical section, termed an "acquire", all operations with respect to the local memory variables must first be completed.
An acquire is effectively a read memory operation used to obtain access to a certain set of shared locations.
Release, on the other hand, is a write operation that is performed for granting permission to access the shared locations.
It also requires the use of acquire and release instructions to explicitly state an entry or exit to a critical section.
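The acquire/release pairing is commonly illustrated with the message-passing idiom: the producer completes all its writes to shared data, then performs a release; the consumer performs an acquire before reading. In this sketch, `threading.Event` supplies the release/acquire ordering (`set()` acting as the release, `wait()` as the acquire):

```python
import threading

# Message passing with acquire/release: the producer publishes its writes
# with a release; the consumer's acquire guarantees it sees them all.

data = []
flag = threading.Event()

def producer():
    data.extend([1, 2, 3])  # ordinary writes to the shared locations
    flag.set()              # "release": grant access to the locations

def consumer(out):
    flag.wait()             # "acquire": obtain access to the locations
    out.append(list(data))  # guaranteed to see the completed writes

result = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(result,))
t2.start(); t1.start()
t1.join(); t2.join()
print(result[0])  # [1, 2, 3]
```

Without the release/acquire pair, a weakly ordered machine could let the consumer observe the flag before the data writes, which is exactly what release consistency rules out.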
In general consistency,[17] all the copies of a memory location are eventually identical after all processes' writes are completed.
Most shared decentralized databases have an eventual consistency model: either BASE (basically available, soft state, eventually consistent) or a combination of ACID and BASE sometimes called SALT (sequential, agreed, ledgered, tamper-resistant; also expanded as symmetric, admin-free, ledgered, time-consensual).
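A minimal sketch of eventual consistency, assuming a last-writer-wins merge rule (one of several possible conflict-resolution policies): replicas may receive the same writes in different orders, yet all converge to the same value once every write has been delivered.

```python
# Eventual consistency sketch: each write carries a (timestamp, writer-id)
# stamp; a replica keeps a write only if its stamp is the highest seen.

def apply(replica, write):
    key, value, stamp = write
    if key not in replica or stamp > replica[key][1]:
        replica[key] = (value, stamp)

writes = [("x", "a", (1, "p1")), ("x", "b", (2, "p2")), ("x", "c", (3, "p1"))]

replica1, replica2 = {}, {}
for w in writes:            # delivered in order
    apply(replica1, w)
for w in reversed(writes):  # delivered in the opposite order
    apply(replica2, w)

print(replica1["x"] == replica2["x"])  # True: the replicas converged
print(replica1["x"][0])                # 'c': the last write wins
```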
This is done so that, together with the relaxed constraints, performance increases; however, the programmer is then responsible for enforcing memory consistency by applying synchronization techniques, and must have a good understanding of the hardware.
To ensure sequential consistency in the above models, safety nets or fences are used to manually enforce the constraint.
On the other hand, the TSO and PC models do not provide safety nets, but programmers can still use read-modify-write operations to make it appear that program order is maintained between a write and a following read.
The ability to pipeline and overlap writes to different locations from the same processor is the key hardware optimisation enabled by PSO.
Tanenbaum et al. (2007)[4] define the consistency model as a contract between the software (processes) and the memory implementation (data store).
In distributed systems, maintaining sequential consistency in order to control concurrent operations is essential.
In some special data stores without simultaneous updates, client-centric consistency models can deal with inconsistencies in a less costly way.
In this approach, a client requests and receives permission from multiple servers in order to read and write replicated data.
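A common form of this is quorum-based replication. The sketch below (with illustrative parameter names N, W, R, not tied to any particular system) shows the key invariant: choosing R + W > N guarantees that every read quorum overlaps the latest write quorum, so a read always sees the newest version.

```python
# Quorum-based replication sketch: with N replicas, a write must obtain
# permission from W of them and a read from R; R + W > N forces overlap.

N, W, R = 5, 3, 3
replicas = [{"value": None, "version": 0} for _ in range(N)]

def write(value, version, targets):
    assert len(targets) >= W          # need a write quorum
    for i in targets:
        replicas[i] = {"value": value, "version": version}

def read(targets):
    assert len(targets) >= R          # need a read quorum
    # The highest version among the contacted replicas is the latest write.
    return max((replicas[i] for i in targets), key=lambda r: r["version"])

write("v1", 1, [0, 1, 2])             # write quorum: replicas 0, 1, 2
latest = read([2, 3, 4])              # read quorum overlaps at replica 2
print(latest["value"])                # 'v1'
```

With R + W > N (here 3 + 3 > 5), the read quorum {2, 3, 4} necessarily intersects the write quorum {0, 1, 2}, so the stale replicas 3 and 4 are outvoted by version number.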
Some other approaches in middleware-based distributed systems apply software-based solutions to provide cache consistency.
Cache consistency models can differ in their coherence detection strategies that define when inconsistencies occur.