denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors in
to be interpreted as simplices of probability vectors,[3] whose compactness implies that the minima and maxima in these formulas exist.
[5] However, the version of Yao's principle using an equality rather than an inequality can also be useful when proving lower bounds on randomized algorithms.
The equality implies that there is no loss of generality in proving lower bounds in this way: whatever the actual best randomized algorithm might be, there is some input distribution against which a matching lower bound on its complexity can be proven.
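In symbols (with notation assumed here only for illustration: a finite class \(\mathcal{A}\) of deterministic algorithms, the class \(\mathcal{R}\) of randomized algorithms obtained from distributions over \(\mathcal{A}\), a cost \(c(A,x)\) of algorithm \(A\) on input \(x\), and distributions \(\pi\) on inputs), the equality form can be written as
\[
\max_{\pi}\,\min_{A\in\mathcal{A}} \operatorname{E}_{x\sim\pi}\bigl[c(A,x)\bigr]
\;=\;
\min_{R\in\mathcal{R}}\,\max_{x} \operatorname{E}\bigl[c(R,x)\bigr],
\]
so a distribution \(\pi\) attaining the maximum on the left certifies a matching lower bound on the right.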
For these problems, for any fixed number of elements, the inputs can be represented as permutations and the deterministic algorithms as decision trees, so that both are finite in number, as Yao's principle requires.
Yao's principle extends lower bounds on the average-case number of comparisons made by deterministic algorithms, for random permutations, to the worst-case analysis of randomized comparison algorithms.
[2] Subsequent to Yao's work, Walter Cunto and Ian Munro showed that, for random permutations, any deterministic algorithm must perform at least
[9] By Yao's principle, randomized algorithms must make the same number of comparisons, in expectation, on their worst-case inputs.
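As a concrete illustration of this transfer, using sorting rather than the specific result cited above: any deterministic comparison sort of \(n\) elements is a decision tree with at least \(n!\) leaves, so its average number of comparisons over uniformly random permutations is at least \(\log_2(n!)\). Yao's principle turns this average-case bound into a worst-case bound on randomized comparison sorts:
\[
\min_{R}\,\max_{\sigma} \operatorname{E}\bigl[\text{comparisons of }R\text{ on }\sigma\bigr]
\;\ge\;
\min_{A}\, \operatorname{E}_{\sigma}\bigl[\text{comparisons of }A\text{ on }\sigma\bigr]
\;\ge\; \log_2(n!) = n\log_2 n - O(n),
\]
where \(A\) ranges over deterministic comparison sorts, \(R\) over randomized ones, and \(\sigma\) is a uniformly random permutation in the inner expectation.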
of edges, a randomized algorithm must probe a quadratic number of pairs of vertices.
Yao also used this method to show that quadratically many queries are needed for the properties of containing a given tree or clique as a subgraph, of containing a perfect matching, and of containing a Hamiltonian cycle, for small enough constant error probabilities.
Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems".
In this case, Yao's principle describes an equality between the average-case complexity of deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.
[6][14] An example described by Avi Wigderson (based on a paper by Emanuele Viola) is the communication complexity for two parties, each holding
bits of communication is possible, easily achieved by one party sending their whole input to the other.
However, parties with a shared source of randomness and a fixed error probability can exchange 1-bit hashes of prefixes of their inputs to perform a noisy binary search for the first position where their inputs differ, achieving
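The following is a minimal sketch of this idea in Python. All names and details (prefix_hash, first_difference, the repetition parameter reps) are invented for illustration; it simulates the exchanged bits locally and repeats each hash to make every probe reliable, rather than using the more careful noisy binary search of the protocol described above, so it is not bit-optimal.

```python
import random

def prefix_hash(bits, length, seed):
    # 1-bit hash of bits[:length]: a random parity whose mask is derived
    # deterministically from the shared seed, so both parties compute the
    # same hash function without communication.
    rng = random.Random(f"{seed}:{length}")
    return sum(b & rng.getrandbits(1) for b in bits[:length]) % 2

def first_difference(x, y, shared_seed, reps=20):
    # Binary search for the first index at which bit strings x and y differ,
    # comparing only 1-bit prefix hashes.  Equal prefixes always hash equally;
    # unequal prefixes collide with probability 1/2 per hash, so `reps`
    # independent hashes make each probe fail with probability 2**-reps.
    if x == y:
        return None
    lo, hi = 0, len(x)  # invariant: x[:lo] == y[:lo] and x[:hi] != y[:hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        same = all(prefix_hash(x, mid, f"{shared_seed}:{r}") ==
                   prefix_hash(y, mid, f"{shared_seed}:{r}") for r in range(reps))
        if same:
            lo = mid
        else:
            hi = mid
    return lo  # x[:lo] == y[:lo] but x[lo] != y[lo], with high probability

x = [0, 1, 1, 0, 1, 0, 1, 1]
y = [0, 1, 1, 0, 0, 0, 1, 1]
print(first_difference(x, y, shared_seed=42))  # first differing index: 4
```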
[6][15] Yao's principle has also been applied to the competitive ratio of online algorithms.
Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure is an expected value rather than the reciprocal of an expected value.
As an instance of the coupon collector's problem, the expected number of requests per phase is
page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at least
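The coupon-collector calculation underlying the phase lengths can be checked with a short simulation. The sketch below is only an illustration: the number of coupon types n and the trial count are arbitrary, and it verifies the standard coupon-collector expectation n·H_n rather than the specific quantity in the paging analysis above.

```python
import random

def harmonic(n):
    # n-th harmonic number H_n = 1 + 1/2 + ... + 1/n
    return sum(1.0 / i for i in range(1, n + 1))

def coupon_collector_trial(n, rng):
    # Number of uniform draws from n coupon types until every type has appeared.
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(rng.randrange(n))
        draws += 1
    return draws

rng = random.Random(0)
n, trials = 8, 20000
avg = sum(coupon_collector_trial(n, rng) for _ in range(trials)) / trials
print(f"empirical mean: {avg:.2f}, exact expectation n*H_n: {n * harmonic(n):.2f}")
```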
may be interpreted as a randomized choice among deterministic algorithms, and thus as a mixed strategy for Alice.
Similarly, a non-random algorithm may be thought of as a pure strategy for Alice.
By the minimax theorem of John von Neumann, there exists a game value
[2] The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player.
[3] However, although linear programs may be solved in polynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly.
Therefore, formulating and solving these programs to find these optimal strategies is often impractical.
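For problems small enough to enumerate, though, the computation is straightforward. The following sketch uses a toy cost matrix and scipy.optimize.linprog, neither of which comes from the sources above, to compute the game value and an optimal randomized algorithm (Alice's optimal mixed strategy).

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative cost matrix: cost[i][j] = cost of deterministic algorithm i on input j.
# (Toy numbers chosen only to make the example concrete.)
cost = np.array([[1.0, 3.0],
                 [2.0, 1.0]])
m, n = cost.shape

# Alice's linear program: choose probabilities p over algorithms and a value v,
# minimizing v subject to sum_i p_i * cost[i][j] <= v for every input j and
# sum_i p_i = 1.  Variables are z = (p_1, ..., p_m, v).
c = np.r_[np.zeros(m), 1.0]                    # objective: minimize v
A_ub = np.c_[cost.T, -np.ones(n)]              # one constraint row per input j
b_ub = np.zeros(n)
A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)   # probabilities sum to 1
b_eq = [1.0]
bounds = [(0, None)] * m + [(None, None)]      # p >= 0, v unrestricted
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p, value = res.x[:m], res.x[-1]
print("optimal randomized algorithm:", p, "game value:", value)
# For this matrix: p is approximately [1/3, 2/3] and the game value is 5/3.

# Bob's linear program, over distributions q on inputs (the hard input
# distribution), is the dual maximization and attains the same game value.
```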
However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle, such as the allowed probability of error.
Ben-David and Blais show that, for Boolean functions under many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.
It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in a weak sense that the inputs for which this is true have probability
, or in a strong sense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs.
For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability