Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model is the depth of its tree.
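As a minimal sketch of this model (illustrative code only; the class and function names are made up for this example), a decision tree can be represented with internal nodes that hold a yes–no test and leaves that hold results, so that the worst-case number of tests performed is exactly the depth of the tree.

class Leaf:
    def __init__(self, result):
        self.result = result

class Node:
    def __init__(self, test, yes, no):
        self.test = test   # one unit-cost yes-no query on the input
        self.yes = yes     # subtree taken when the answer is "yes"
        self.no = no       # subtree taken when the answer is "no"

def evaluate(tree, x):
    """Run the tree on input x; return (result, number of queries made)."""
    queries = 0
    while isinstance(tree, Node):
        queries += 1
        tree = tree.yes if tree.test(x) else tree.no
    return tree.result, queries

def depth(tree):
    """Worst-case number of queries over all inputs = depth of the tree."""
    if isinstance(tree, Leaf):
        return 0
    return 1 + max(depth(tree.yes), depth(tree.no))

# Example: decide the order of a pair with a single comparison query.
pair_order = Node(lambda x: x[0] > x[1], Leaf("x0 > x1"), Leaf("x0 <= x1"))
print(evaluate(pair_order, (3, 1)))  # ('x0 > x1', 1)
print(depth(pair_order))             # 1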
Decision tree models are instrumental in establishing lower bounds for the complexity of certain classes of computational problems and algorithms.
Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson.
In a comparison sort, information about the input sequence x1, x2, ..., xn is gained only through pairwise comparisons. Assuming that the items to be sorted are all distinct and comparable, each such comparison can be rephrased as a yes-or-no question: is xi > xj?
Such an algorithm can be modeled as a binary decision tree in which every internal node is one of these comparisons, and every leaf is labeled with a permutation that describes how the input sequence was scrambled from the fully ordered list of items.
Comparison sorting algorithms require Ω(n log n) comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of the n items; otherwise, it would fail for some particular input ordering. The corresponding decision tree must therefore have at least n! leaves, and a binary tree with at least n! leaves has depth at least log2(n!) = Ω(n log n), so this is a lower bound on the run time of a comparison sorting algorithm.
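The counting step is easy to make concrete; the following sketch (illustrative code with a hypothetical function name) evaluates the resulting lower bound ceil(log2(n!)) for a few values of n.

import math

def comparison_sort_lower_bound(n):
    """A binary decision tree that sorts n distinct items needs at least n!
    leaves, hence depth at least ceil(log2(n!)) = Omega(n log n)."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (3, 5, 10, 100):
    print(n, comparison_sort_lower_bound(n))
# prints: 3 3, 5 7, 10 22, 100 525; by Stirling's approximation,
# log2(n!) grows like n*log2(n) - n*log2(e), which is Theta(n log n).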
In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.[2]: 91
This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree.
In essence, this is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least log2(n!) = Ω(n log n) bits of information about the input ordering, while each yes-no query yields at most one bit.
A similar argument works for general lower bounds for computing order statistics.
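For example, finding the maximum of n distinct items requires n − 1 comparisons in the comparison model, since every non-maximal item must lose at least one comparison, and a simple tournament matches this bound; a sketch (illustrative code):

def maximum(items):
    """Return the maximum together with the number of comparisons used.
    Exactly len(items) - 1 comparisons are made, matching the decision-tree
    lower bound for this order statistic."""
    comparisons = 0
    best = items[0]
    for x in items[1:]:
        comparisons += 1
        if x > best:
            best = x
    return best, comparisons

print(maximum([4, 1, 7, 3, 9, 2]))  # (9, 5)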
In a linear decision tree, each query asks for the sign of a linear function of the input, so every query splits the space with a hyperplane; a function computed by such a tree therefore has fibers that can be constructed by taking unions and intersections of half-spaces. Algebraic decision trees generalize this by allowing the queries to be polynomials of bounded degree. Geometrically, the space is then divided into semi-algebraic sets (a generalization of the half-spaces arising in the linear case).
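As a concrete illustration (a made-up example, not drawn from the sources cited here), deciding whether a point lies inside a fixed triangle can be done by a linear decision tree of depth 3: each query asks on which side of one edge the input point lies, and the "inside" fiber is the intersection of three half-planes.

def left_of(p, a, b):
    """Linear query in the input p: is p strictly to the left of the
    directed line through the fixed points a and b?"""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) > 0

def in_triangle(p, a, b, c):
    """Depth-3 linear decision tree: p lies inside the counter-clockwise
    triangle abc exactly when it is to the left of all three edges,
    i.e. in the intersection of three half-planes."""
    return left_of(p, a, b) and left_of(p, b, c) and left_of(p, c, a)

a, b, c = (0, 0), (4, 0), (0, 4)      # fixed (hypothetical) triangle
print(in_triangle((1, 1), a, b, c))   # True
print(in_triangle((5, 5), a, b, c))   # False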
These decision tree models, defined by Rabin[3] and Reingold,[4] are often used for proving lower bounds in computational geometry.[5]
For example, Ben-Or showed that element uniqueness (the task of deciding whether the n input numbers are all distinct) requires algebraic decision trees of depth Ω(n log n).
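This lower bound is tight up to constant factors: sorting and then comparing adjacent elements decides element uniqueness with O(n log n) comparisons, as in the sketch below (illustrative code).

def all_distinct(xs):
    """Element uniqueness with O(n log n) comparisons: sort the input and
    check that no two adjacent elements of the sorted order are equal."""
    ys = sorted(xs)                     # O(n log n) comparisons
    return all(ys[i] < ys[i + 1] for i in range(len(ys) - 1))

print(all_distinct([3.1, 2.7, 5.0]))    # True
print(all_distinct([3.1, 2.7, 3.1]))    # False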
The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained.
Equivalently, a randomized decision tree can be defined as a probability distribution over deterministic decision trees: one deterministic tree is drawn from the distribution and then evaluated on the input.
The bounded-error randomized decision tree complexity, R2(f), is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x) with probability at least 2/3 for every input x.
R2(f) is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error.
R0(f) measures the expected depth of a randomized decision tree that must always be correct (i.e., has zero error); it is known as the Las Vegas randomized decision-tree complexity.
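As a standard illustration of the zero-error (Las Vegas) model, not specific to the sources above, consider evaluating an AND of ORs by visiting the blocks, and the bits within each block, in random order and stopping as soon as the value is forced; the answer is always correct, and only the number of queries is a random variable.

import random

def eval_or(bits, positions, counter):
    """OR of the bits at the given positions, queried in random order and
    stopping at the first 1; always correct (zero error)."""
    order = list(positions)
    random.shuffle(order)
    for i in order:
        counter[0] += 1                 # one query to the input
        if bits[i]:
            return True
    return False

def eval_and_of_ors(bits, blocks):
    """AND of ORs, visiting the blocks in random order and stopping at the
    first OR that evaluates to False. Returns (value, queries used)."""
    counter = [0]
    order = list(range(len(blocks)))
    random.shuffle(order)
    value = True
    for b in order:
        if not eval_or(bits, blocks[b], counter):
            value = False
            break
    return value, counter[0]

# Hypothetical instance: 4 blocks of 4 bits, one 1 per block (value: True).
blocks = [range(0, 4), range(4, 8), range(8, 12), range(12, 16)]
bits = [0, 0, 0, 1,  1, 0, 0, 0,  0, 0, 1, 0,  0, 1, 0, 0]
runs = [eval_and_of_ors(bits, blocks)[1] for _ in range(10000)]
print(sum(runs) / len(runs))   # about 10 expected queries on this input,
                               # while the function depends on all 16 bits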
The nondeterministic decision tree complexity of a function, better known as its certificate complexity, measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.
The analogous notion, in which the verifier is only required to be correct with probability 2/3, is known as randomized certificate complexity.
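Certificate complexity can be computed by brute force for very small functions; the following sketch (exponential time, illustrative only) finds, for every input, the smallest set of positions whose values force the function value.

from itertools import combinations, product

def certificate_complexity(f, n):
    """For each input x, find the smallest set S of positions such that
    every input agreeing with x on S has the same value f(x); a
    nondeterministic decision tree can guess S and verify it with |S|
    queries. Return the maximum over all inputs x."""
    worst = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for k in range(n + 1):
            forced = any(
                all(f(y) == fx
                    for y in product((0, 1), repeat=n)
                    if all(y[i] == x[i] for i in S))
                for S in combinations(range(n), k))
            if forced:
                worst = max(worst, k)
                break
    return worst

def OR3(x):
    return int(any(x))

print(certificate_complexity(OR3, 3))   # 3: the all-zeros input needs all bits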
The bounded-error quantum decision tree complexity, Q(f), is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x) with probability at least 2/3 for all x; the exact quantum decision tree complexity, QE(f), instead requires the correct result with certainty on every input.
It follows from the definitions that Q(f) ≤ R2(f) ≤ R0(f) ≤ D(f) for every total Boolean function f, where D(f) denotes the deterministic decision tree complexity. Finding the best upper bounds in the converse direction, that is, bounding D(f) by a polynomial in the weaker measures, is a major goal in the field of query complexity.
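The deterministic measure D(f) that these bounds compare against can itself be computed by brute force for tiny functions, by choosing the best next query against the worst-case answer; a sketch (exponential time, illustrative only):

from itertools import product

def deterministic_complexity(f, n):
    """D(f): the minimum over query strategies of the worst-case number of
    queries needed to determine f(x). Brute force, only for tiny n."""
    all_inputs = list(product((0, 1), repeat=n))

    def solve(restriction):
        # restriction[i] is None if bit i is still unknown, else its value.
        consistent = [x for x in all_inputs
                      if all(r is None or x[i] == r
                             for i, r in enumerate(restriction))]
        if len({f(x) for x in consistent}) == 1:
            return 0                      # value of f is already forced
        best = n
        for i in range(n):
            if restriction[i] is None:    # try querying bit i next
                worst = max(solve(restriction[:i] + (b,) + restriction[i+1:])
                            for b in (0, 1))
                best = min(best, 1 + worst)
        return best

    return solve((None,) * n)

def MAJ3(x):
    return int(sum(x) >= 2)

print(deterministic_complexity(MAJ3, 3))   # 3: majority of 3 bits needs all of them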
Blum and Impagliazzo,[10] Hartmanis and Hemachandra,[11] and Tardos[12] independently discovered that D(f) ≤ R0(f)².
Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: D(f) = O(R2(f)³).
A tighter relationship is known between the Monte Carlo and Las Vegas models: R0(f) is at most quadratic in R2(f), up to a logarithmic factor.
For exact quantum queries, a cubic relationship D(f) = O(QE(f)³) is known,[17][18] improving a quartic bound due to Beals et al.[9] These polynomial relationships are valid only for total Boolean functions.
It is easy to see that s(f) ≤ bs(f), where s(f) and bs(f) denote the sensitivity and block sensitivity of f, so the conjecture is specifically concerned with finding a lower bound for sensitivity in terms of block sensitivity.[13] In 2019, Hao Huang proved the sensitivity conjecture, showing that block sensitivity is polynomially bounded by sensitivity: bs(f) = O(s(f)⁴).
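Both sensitivity and block sensitivity can be computed by brute force for very small functions, which makes it easy to sanity-check the relationship on examples (an illustrative sketch; the search is exponential).

from itertools import product, combinations

def flip(x, block):
    """Flip the bits of x at the given set of positions."""
    return tuple(1 - v if i in block else v for i, v in enumerate(x))

def sensitivity(f, n):
    """s(f): maximum over inputs x of the number of single-bit flips
    that change the value f(x)."""
    return max(sum(f(flip(x, {i})) != f(x) for i in range(n))
               for x in product((0, 1), repeat=n))

def block_sensitivity(f, n):
    """bs(f): maximum over inputs x of the number of pairwise disjoint
    blocks of positions, each of whose flips changes f(x)."""
    def most_blocks(x, remaining):
        best = 0
        for k in range(1, len(remaining) + 1):
            for block in combinations(remaining, k):
                if f(flip(x, set(block))) != f(x):
                    rest = tuple(i for i in remaining if i not in block)
                    best = max(best, 1 + most_blocks(x, rest))
        return best
    return max(most_blocks(x, tuple(range(n)))
               for x in product((0, 1), repeat=n))

def MAJ3(x):
    return int(sum(x) >= 2)

s, bs = sensitivity(MAJ3, 3), block_sensitivity(MAJ3, 3)
print(s, bs, bs <= 2 * s**4)   # 2 2 True, consistent with bs(f) = O(s(f)^4)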