Allowing only a single master makes it easier to achieve consistency among the members of the group, but is less flexible than multi-master replication.
It is not required for all domain controllers to replicate with each other as this would cause excessive network traffic in large Active Directory deployments.
OpenDS/OpenDJ multi-master replication is asynchronous: it uses a log with a publish-subscribe mechanism that allows scaling to a large number of nodes.[2]
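The mechanism can be pictured as a shared change log that every replica both publishes to and consumes from. The following Python sketch is a minimal, purely illustrative model of that publish-subscribe idea; the class and method names are invented for the example and do not correspond to the OpenDJ code base, and delivery is shown synchronously for brevity.

```python
class ChangeLog:
    """Illustrative publish-subscribe change log (not OpenDJ's implementation)."""

    def __init__(self):
        self.entries = []       # ordered list of published changes
        self.subscribers = []   # replicas consuming the log

    def publish(self, origin, change):
        # Each change is tagged with a monotonically growing sequence number.
        seq = len(self.entries) + 1
        record = (seq, origin, change)
        self.entries.append(record)
        for replica in self.subscribers:
            replica.receive(record)

    def subscribe(self, replica):
        self.subscribers.append(replica)


class Replica:
    """A directory server that applies changes it consumes from the log."""

    def __init__(self, name, log):
        self.name = name
        self.data = {}
        self.log = log
        log.subscribe(self)

    def write(self, key, value):
        # A local write is accepted immediately, then published to the log.
        self.data[key] = value
        self.log.publish(self.name, (key, value))

    def receive(self, record):
        seq, origin, (key, value) = record
        if origin != self.name:   # do not re-apply our own change
            self.data[key] = value


log = ChangeLog()
a, b = Replica("dsA", log), Replica("dsB", log)
a.write("uid=jdoe", {"cn": "John Doe"})
print(b.data)   # the change propagates to dsB via the log
```

Because replicas only depend on the log rather than on each other, adding another node means adding one more subscriber, which is what allows the scheme to scale to many nodes.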
Apache CouchDB uses a simple, HTTP-based multi-master replication system built on its append-only data store and its use of Multiversion Concurrency Control (MVCC).[4]
Cloudant, a distributed database system, uses largely the same HTTP API as Apache CouchDB and exposes the same ability to replicate using Multiversion Concurrency Control (MVCC).
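Because the replication mechanism is exposed over plain HTTP, a bidirectional (multi-master) setup can be arranged by asking each node to replicate from the other via CouchDB's `_replicate` endpoint. The snippet below is a sketch of that pattern; the host names, database name, and credentials are placeholders, not values from any real deployment.

```python
import requests

# Placeholder endpoints and credentials -- adjust for a real deployment.
NODE_A = "http://admin:secret@node-a:5984"
NODE_B = "http://admin:secret@node-b:5984"
DB = "inventory"

def replicate(source_node, target_node):
    """Ask `source_node` to continuously push changes in DB to `target_node`."""
    body = {
        "source": f"{source_node}/{DB}",
        "target": f"{target_node}/{DB}",
        "continuous": True,       # keep replicating as new changes arrive
        "create_target": True,    # create the database on the target if missing
    }
    resp = requests.post(f"{source_node}/_replicate", json=body)
    resp.raise_for_status()
    return resp.json()

# Replicating in both directions yields multi-master behaviour: writes accepted
# on either node eventually appear on the other, with MVCC revision trees used
# to detect and record conflicting updates.
replicate(NODE_A, NODE_B)
replicate(NODE_B, NODE_A)
```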
It maintains database consistency across multiple hardware nodes by replicating transactions in a synchronous manner (two-phase commit).
Asynchronous multi-master replication commits data changes to a deferred transaction queue which is periodically processed on all databases in the cluster.
Synchronous multi-master replication uses Oracle's two-phase commit functionality to ensure that all databases within the cluster have a consistent dataset.
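Conceptually, the asynchronous variant queues each locally committed change and pushes the queue to the other masters on a schedule. The Python sketch below models that deferred-queue idea in a generic way; it is not Oracle's implementation, and the names are invented for illustration.

```python
import time
from collections import deque

class DeferredQueueMaster:
    """Generic model of asynchronous multi-master replication via a
    deferred transaction queue (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.data = {}
        self.deferred = deque()   # locally committed changes awaiting propagation
        self.peers = []

    def commit(self, key, value):
        # The local transaction commits immediately...
        self.data[key] = value
        # ...and the change is queued for later delivery to the other masters.
        self.deferred.append((time.time(), key, value))

    def push_queue(self):
        """Periodically drain the deferred queue to every peer."""
        while self.deferred:
            ts, key, value = self.deferred.popleft()
            for peer in self.peers:
                peer.apply_remote(ts, key, value)

    def apply_remote(self, ts, key, value):
        # A real system would resolve conflicts here (e.g. latest timestamp wins).
        self.data[key] = value


a, b = DeferredQueueMaster("ora1"), DeferredQueueMaster("ora2")
a.peers.append(b)
b.peers.append(a)

a.commit("part-42", {"qty": 7})   # commits locally at once
a.push_queue()                    # on the next replication cycle, b catches up
print(b.data)
```

The trade-off between the two modes is visible in the sketch: the deferred queue keeps local commits fast but leaves a window where the masters disagree, whereas the synchronous, two-phase-commit form removes that window at the cost of waiting on every node.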
There is also an external project, Galera Cluster, created by Codership, that provides true multi-master capability, based on a fork of the InnoDB storage engine and custom replication plug-ins.
Percona XtraDB Cluster is also a combination of the Galera replication library and MySQL, supporting multi-master replication.
Various options exist for distributed multi-master replication, including Bucardo, rubyrep, and BDR (Bi-Directional Replication).
BDR is aimed at eventual inclusion in PostgreSQL core and has been benchmarked as demonstrating significantly enhanced performance[7] over earlier options.
The latest version, BDR 3.6, provides column-level conflict detection, CRDTs, eager replication, multi-node query consistency, and many other features.
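A CRDT resolves concurrent updates by merging them rather than picking a single winner. As a hedged illustration of that idea only (not BDR's code or SQL interface), the sketch below shows a grow-only counter: each node increments its own slot, and merging takes the per-node maximum, so replicas converge regardless of the order in which updates are exchanged.

```python
class GCounter:
    """Grow-only counter CRDT: a per-node increment map whose merge is an
    element-wise maximum (illustrative sketch, not BDR's implementation)."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.counts = {}   # node_id -> increments observed from that node

    def increment(self, amount=1):
        self.counts[self.node_id] = self.counts.get(self.node_id, 0) + amount

    def merge(self, other):
        # Merging is commutative, associative, and idempotent,
        # so any exchange schedule converges to the same value.
        for node, count in other.counts.items():
            self.counts[node] = max(self.counts.get(node, 0), count)

    @property
    def value(self):
        return sum(self.counts.values())


n1, n2 = GCounter("node1"), GCounter("node2")
n1.increment(3)        # concurrent, independent updates on two masters
n2.increment(5)
n1.merge(n2)
n2.merge(n1)
assert n1.value == n2.value == 8   # both replicas converge without conflict
```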
It is not required for all Ingres servers in an environment to replicate with each other as this could cause excessive network traffic in large implementations.
In the event of a source, target, or network failure, the two-phase commit protocol enforces data integrity by ensuring that either the whole transaction is replicated or none of it is.
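That all-or-nothing guarantee comes from a prepare vote followed by a commit or abort decision. The sketch below is a simplified, generic two-phase commit coordinator in Python, intended only to illustrate the shape of the protocol rather than Ingres Replicator's actual implementation.

```python
class Participant:
    """A replica that can tentatively prepare a change, then commit or roll it back."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.data = {}
        self.pending = None

    def prepare(self, key, value):
        # Vote "yes" only if the change can definitely be applied.
        if not self.healthy:
            return False
        self.pending = (key, value)
        return True

    def commit(self):
        key, value = self.pending
        self.data[key] = value
        self.pending = None

    def rollback(self):
        self.pending = None


def two_phase_commit(participants, key, value):
    """Replicate the whole change everywhere, or nowhere."""
    # Phase 1: every participant must vote to commit.
    if all(p.prepare(key, value) for p in participants):
        # Phase 2a: unanimous yes -- everyone commits.
        for p in participants:
            p.commit()
        return True
    # Phase 2b: any failure -- everyone rolls back.
    for p in participants:
        p.rollback()
    return False


source, target = Participant("source"), Participant("target", healthy=False)
ok = two_phase_commit([source, target], "order-17", "shipped")
print(ok, source.data, target.data)   # False, {}, {} -- nothing partially applied
```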