Concurrency control

In information technology and computer science, especially in the fields of computer programming, operating systems, multiprocessors, and databases, concurrency control ensures that concurrent operations generate correct results, while obtaining those results as quickly as possible.

Introducing concurrency control into a system means applying operation constraints which typically result in some performance reduction.

Operation consistency and correctness should be achieved with the best possible efficiency, without reducing performance below reasonable levels.

For example, a failure in concurrency control can result in data corruption from torn read or write operations.
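Such corruption can be illustrated with a classic lost-update race. The sketch below is a toy example, not a database implementation: two threads perform an unsynchronized read-modify-write on a shared balance, and one update is lost. The `withdraw` function and the sleep used to widen the race window are illustrative assumptions.

```python
import threading
import time

balance = 100  # shared data with no concurrency control

def withdraw(amount):
    """Unsafe read-modify-write: the read and the write can interleave
    with another thread, so one of the two updates is lost."""
    global balance
    current = balance           # read
    time.sleep(0.01)            # widen the race window (simulates work)
    balance = current - amount  # write based on a now-stale read

t1 = threading.Thread(target=withdraw, args=(30,))
t2 = threading.Thread(target=withdraw, args=(50,))
t1.start(); t2.start()
t1.join(); t2.join()

# A correct serial execution would leave 100 - 30 - 50 = 20,
# but both threads read 100, so the final balance is 70 or 50.
print(balance)
```

Both withdrawals read the same initial balance before either writes, so the final value reflects only one of them.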

Consequently, a vast body of related research has been accumulated since database systems emerged in the early 1970s.

An alternative theory for concurrency control of atomic transactions over abstract data types is presented in (Lynch et al. 1993), and is not utilized below.

Without concurrency control, such systems can neither provide correct results nor keep their databases consistent.

When a choice among mechanisms exists and the trade-offs between them are known, the category and method should be chosen to provide the highest performance.

The major methods,[1] which each have many variants and in some cases may overlap or be combined, are locking (e.g., Two-phase locking), serialization graph checking, Timestamp ordering, and Commitment ordering. Other major concurrency control types that are utilized in conjunction with the methods above include Multiversion concurrency control (MVCC) and index concurrency control.

The most common mechanism type in database systems since their early days in the 1970s has been Strong strict Two-phase locking (SS2PL; also called Rigorous scheduling or Rigorous 2PL), which is a special case (variant) of Two-phase locking (2PL).
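The SS2PL discipline can be sketched as follows: a transaction takes a lock on each item on first access and releases every lock only at commit, never earlier. This is a minimal single-lock-type toy (exclusive locks only, no deadlock detection); the class and method names are illustrative, not a real database API.

```python
import threading

class SS2PLTransaction:
    """Toy strong strict two-phase locking: every lock acquired during
    the transaction is held until commit, then all are released at once."""

    def __init__(self, lock_table, store):
        self.lock_table = lock_table  # shared map: item -> threading.Lock
        self.store = store            # shared data items
        self.held = []                # locks held until commit

    def _lock(self, key):
        lock = self.lock_table.setdefault(key, threading.Lock())
        if lock not in self.held:
            lock.acquire()            # growing phase: lock on first access
            self.held.append(lock)

    def read(self, key):
        self._lock(key)
        return self.store.get(key)

    def write(self, key, value):
        self._lock(key)
        self.store[key] = value

    def commit(self):
        # The shrinking phase happens only here: release everything together,
        # so no other transaction ever sees this transaction's dirty data.
        for lock in reversed(self.held):
            lock.release()
        self.held.clear()

store, locks = {"x": 1, "y": 2}, {}
t = SS2PLTransaction(locks, store)
t.write("x", t.read("x") + t.read("y"))  # x = 1 + 2 = 3
t.commit()
print(store["x"])
```

Because locks are released only at commit, concurrent SS2PL transactions serialize in their commit order.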

For correctness, a common major goal of most concurrency control mechanisms is generating schedules with the Serializability property.

Concurrency control typically also ensures the Recoverability property of schedules for maintaining correctness in cases of aborted transactions (which can always happen for many reasons).

A commonly utilized special case of recoverability is Strictness, which allows efficient database recovery from failure (but excludes optimistic implementations).
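Strictness can be stated operationally: no transaction may read or overwrite an item whose last writer has not yet committed. The checker below is a sketch under an assumed schedule encoding of `(transaction, operation, item)` tuples, with `'c'` marking a commit; the names are illustrative.

```python
def is_strict(schedule):
    """Return True iff the schedule is strict: no transaction reads or
    overwrites an item last written by a still-uncommitted transaction.

    `schedule` is a list of (txn, op, item) tuples with op in
    {'r', 'w', 'c'}; for a commit ('c'), item is None.
    """
    pending_writer = {}  # item -> txn with an uncommitted write on it
    for txn, op, item in schedule:
        if op == "c":
            # Commit clears this transaction's pending writes.
            pending_writer = {k: v for k, v in pending_writer.items()
                              if v != txn}
        else:
            writer = pending_writer.get(item)
            if writer is not None and writer != txn:
                return False  # dirty read or dirty overwrite
            if op == "w":
                pending_writer[item] = txn
    return True

# T2 reads x only after T1's write has committed: strict.
strict = [("T1", "w", "x"), ("T1", "c", None),
          ("T2", "r", "x"), ("T2", "c", None)]
# T2 reads x while T1's write is uncommitted (a dirty read): not strict.
dirty = [("T1", "w", "x"), ("T2", "r", "x"),
         ("T1", "c", None), ("T2", "c", None)]
print(is_strict(strict), is_strict(dirty))
```

In a strict schedule, undoing an aborted transaction never requires cascading aborts of readers or overwriters, which is what makes recovery efficient.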

Local concurrency control techniques are thus commonly and quite effectively utilized in such distributed environments, e.g., in computer clusters and multi-core processors.

However, the local techniques have their limitations, and rely on multiple processes (or threads) supported by multiple processors (or cores) in order to scale.

The properties of the generated schedules, which are dictated by the concurrency control mechanism, may affect the effectiveness and efficiency of recovery.