In modern parallel computing systems, memory consistency must be maintained to avoid undesirable outcomes.
At the programming level, synchronization is applied to explicitly schedule a memory access in one thread to occur after a memory access in another.
In general, a distributed shared memory is release consistent if it obeys the following rules:[2]
1. Before an access to a shared variable is performed, all previous acquires by this processor must have completed.
2. Before a release is performed, all previous reads and writes by this processor must have completed.
3. The acquire and release accesses must be processor consistent.
However, the code in a critical section cannot be issued before the lock acquisition completes, since mutual exclusion would otherwise not be guaranteed.
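As a concrete illustration, here is a minimal sketch in C, assuming POSIX threads, in which pthread_mutex_lock plays the role of the acquire and pthread_mutex_unlock the role of the release:

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    pthread_mutex_lock(&m);   /* acquire: the critical-section accesses below
                                 may not be issued before this completes */
    shared_counter++;         /* protected by mutual exclusion */
    pthread_mutex_unlock(&m); /* release: performed only after all prior
                                 accesses in this thread have completed */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("%d\n", shared_counter); /* always prints 2 */
    return 0;
}
```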
As shown in the code below, correctness can be ensured if the post operation occurs only after all prior memory accesses are complete, especially the store to 'a'.
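A minimal sketch in C of such code, assuming POSIX semaphores, with sem_post serving as the post (release) operation and sem_wait as the acquire:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

int a = 0;
sem_t ready;

void *producer(void *arg) {
    a = 1;            /* ordinary store to a */
    sem_post(&ready); /* release: issued only after the store to a completes */
    return NULL;
}

void *consumer(void *arg) {
    sem_wait(&ready); /* acquire: later accesses wait until it completes */
    printf("a = %d\n", a); /* guaranteed to print a = 1 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&ready, 0, 0);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&ready);
    return 0;
}
```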
This case shows when write propagation is performed in a cache-coherent system under the release consistency model. However, the value of datum is not needed until after the acquire synchronization access in P1, so it can be propagated along with datumIsReady without harming the result of the program.
Consider a system that employs a software-level shared memory abstraction rather than an actual hardware implementation. Lazy release consistency (LRC) requires performing write propagation in bulk at the release point of synchronization.
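To make the contrast concrete, here is a minimal sketch in C of eager versus lazy write propagation, assuming a hypothetical software DSM in which propagate_page() stands in for the real coherence machinery:

```c
#include <stdio.h>

#define MAX_DIRTY 64
static int dirty_pages[MAX_DIRTY]; /* pages written since the last release */
static int ndirty = 0;

/* Hypothetical stand-in for shipping one page (or diff) to other nodes. */
static void propagate_page(int page) {
    printf("propagating page %d\n", page);
}

/* Eager release consistency: propagate each write as soon as it occurs. */
void eager_write(int page) {
    /* ... perform the local write ... */
    propagate_page(page);          /* one communication event per write */
}

/* Lazy release consistency: merely record the write locally. */
void lazy_write(int page) {
    /* ... perform the local write ... */
    if (ndirty < MAX_DIRTY)
        dirty_pages[ndirty++] = page;
}

/* At the release point, LRC flushes all recorded writes in one bulk step. */
void lazy_release(void) {
    for (int i = 0; i < ndirty; i++)
        propagate_page(dirty_pages[i]);
    ndirty = 0;
    /* ... then perform the release synchronization itself ... */
}

int main(void) {
    lazy_write(3);
    lazy_write(7);
    lazy_release(); /* both pages propagate together at the release */
    return 0;
}
```

Since each propagation in a software implementation is a message, batching them at the release amortizes the communication cost.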
Similar to weak ordering, release consistency allows the compiler to freely reorder loads and stores, except that they cannot migrate upward past an acquire synchronization and cannot migrate downward past a release synchronization.
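These one-way fences are what acquire and release atomics express directly; a minimal sketch in C11, assuming memory_order_acquire and memory_order_release correspond to the acquire and release accesses described here:

```c
#include <stdatomic.h>

int a = 0;                 /* ordinary data */
atomic_int flag = 0;

void producer(void) {
    a = 42;                /* may not migrate downward past the release store */
    atomic_store_explicit(&flag, 1, memory_order_release);
}

int consumer(void) {
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;                  /* spin until the flag is set */
    return a;              /* may not migrate upward past the acquire load;
                              guaranteed to read 42 */
}
```

Here the acquire or release role of each synchronization access is stated explicitly by the programmer.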
Unlike in weak ordering, synchronization accesses cannot be easily identified by instruction opcodes alone, since each one must also be distinguished as an acquire or a release. Hence, the burden is on programmers to properly identify acquire and release synchronization accesses.