The protocol estimates the average error rate of a set of quantum gates by implementing long sequences of gate operations randomly sampled from that set.
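In modern implementations the random gates are commonly drawn from the Clifford group. The sketch below illustrates the general structure of such a protocol for a single qubit: a sequence of random Clifford gates is applied, a final inverting gate returns the ideal sequence to the identity, and the probability of recovering the initial state (the survival probability) is recorded as a function of sequence length. The gate-independent depolarizing error model, the sequence lengths, and the function names here are illustrative assumptions, not part of any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generators of the single-qubit Clifford group (24 elements up to global phase).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canonical(U):
    """Strip the global phase so unitaries can be compared for equality."""
    pivot = U.flat[np.flatnonzero(np.abs(U) > 1e-6)[0]]
    return np.round(U / (pivot / abs(pivot)), 6) + 0.0   # +0.0 clears negative zeros

# Close {H, S} under multiplication to enumerate the Clifford group.
cliffords, frontier = {}, [np.eye(2, dtype=complex)]
while frontier:
    nxt = []
    for U in frontier:
        key = canonical(U).tobytes()
        if key not in cliffords:
            cliffords[key] = U
            nxt += [H @ U, S @ U]
    frontier = nxt
cliffords = list(cliffords.values())          # the 24 single-qubit Cliffords

def survival_probability(m, q):
    """Run one benchmarking sequence of m random Cliffords plus the inverting
    gate, applying a depolarizing error of probability q after every gate,
    and return the probability of measuring the initial state |0>."""
    gates = [cliffords[rng.integers(len(cliffords))] for _ in range(m)]
    composite = np.eye(2, dtype=complex)
    for U in gates:
        composite = U @ composite
    gates.append(composite.conj().T)          # recovery gate inverts the ideal sequence
    rho = np.array([[1, 0], [0, 0]], dtype=complex)      # initial state |0><0|
    for U in gates:
        rho = U @ rho @ U.conj().T                       # ideal gate
        rho = (1 - q) * rho + q * np.eye(2) / 2          # gate-independent error channel
    return rho[0, 0].real

# The sequence-averaged survival probability decays with sequence length.
for m in (1, 20, 50, 100):
    avg = np.mean([survival_probability(m, q=0.01) for _ in range(50)])
    print(f"m = {m:3d}: average survival probability = {avg:.3f}")
```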
The original theory of randomized benchmarking, proposed by Joseph Emerson and collaborators,[1] considered the implementation of sequences of Haar-random operations, but this had several practical limitations.
In contrast, randomized benchmarking protocols are the only known approaches to error characterization that scale efficiently as the number of qubits in the system increases.
If the set of operations satisfies certain mathematical properties,[1][4][7][16][10][11][12] such as comprising a sequence of twirls[5][17] with unitary two-designs,[4] then the measured decay can be shown to be an invariant exponential with a rate fixed uniquely by features of the error model.
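Concretely, for a gate set forming a unitary two-design (such as the Clifford group), the sequence-averaged survival probability follows the standard zeroth-order model

\[ F(m) = A\,p^{m} + B , \]

where \(m\) is the number of random gates in the sequence, the constants \(A\) and \(B\) absorb state-preparation and measurement imperfections, and the decay parameter \(p\) depends only on the error channel.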
Emerson, Alicki and Zyczkowski also showed, under the assumption of gate-independent errors, that the measured decay rate is directly related to an important figure of merit, the average gate fidelity, and is independent of the choice of initial state, of any errors in the initial state, and of the specific random sequences of quantum gates.
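In the standard analysis this relation takes the explicit form

\[ F_{\mathrm{avg}} = p + \frac{1-p}{d}, \qquad r = 1 - F_{\mathrm{avg}} = \frac{(d-1)(1-p)}{d}, \]

where \(d = 2^{n}\) is the dimension of the \(n\)-qubit system, so fitting the decay parameter \(p\) to the measured data yields the average error rate \(r\) per gate.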
In 2011 and 2012, Magesan et al.[7][8] proved that the exponential decay rate is fully robust to arbitrary state preparation and measurement (SPAM) errors.
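A minimal numerical illustration of this robustness, using synthetic decay data and hypothetical SPAM values chosen only for the example: fitting the zeroth-order model recovers the same decay parameter \(p\) regardless of the constants \(A\) and \(B\) that absorb the SPAM errors.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
lengths = np.arange(1, 201, 10)
p_true = 0.99                      # decay parameter set by the error model alone

# Two hypothetical SPAM settings: imperfect preparation and measurement only
# rescale and offset the curve through A and B, not the decay parameter p.
for A_true, B_true in [(0.5, 0.5), (0.38, 0.55)]:
    data = A_true * p_true**lengths + B_true + rng.normal(0, 0.003, lengths.size)
    model = lambda m, A, p, B: A * p**m + B
    (A, p, B), _ = curve_fit(model, lengths, data, p0=[0.4, 0.98, 0.5])
    print(f"SPAM (A={A_true}, B={B_true}) -> fitted p = {p:.4f}, "
          f"error rate r = {(1 - p) / 2:.5f}")
```

Both fits return a decay parameter close to 0.99, so the estimated error rate is essentially unchanged by the choice of SPAM parameters.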