The median trick is a generic approach that increases the probability of success of a probabilistic algorithm.[1] Apparently first used in 1986[2] by Jerrum et al.[3] for approximate counting algorithms, the technique was later applied to a broad selection of classification and regression problems.[2]
The idea of the median trick is very simple: run the randomized algorithm with a numeric output multiple times, and use the median of the obtained results as the final answer. For example, for algorithms that are sublinear in time, the same algorithm can be run repeatedly (or in parallel) over random subsets of the input data and, per the Chernoff bound, the median of the results converges to the solution very quickly.[4] For algorithms that are sublinear in space (e.g., counting the distinct elements of a stream), different randomizations of the algorithm (say, with different hash functions) should be used for repeated runs over the same data.[5]
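A minimal sketch of the trick in Python, using a hypothetical estimator `sample_mean` that approximates the mean of a large list from a small random sample (all names here are illustrative, not taken from the cited sources):

```python
import random
import statistics

def median_of_runs(estimator, runs, *args, **kwargs):
    # Run an independent randomized estimator several times and
    # return the median of the numeric results (the "median trick").
    return statistics.median(estimator(*args, **kwargs) for _ in range(runs))

# Hypothetical sublinear-in-time estimator: approximate the mean of a
# large list by averaging a small random sample of it.
def sample_mean(data, sample_size=100):
    return statistics.fmean(random.choices(data, k=sample_size))

if __name__ == "__main__":
    data = [random.gauss(10.0, 5.0) for _ in range(1_000_000)]
    # Each single run is noisy; the median of 15 runs is much less likely
    # to be far from the true mean than any individual run.
    print(median_of_runs(sample_mean, 15, data))
```

In the streaming setting mentioned above, the same wrapper would instead re-run the sketch over the same data with independently chosen hash functions (or seeds), rather than over different random subsets.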
Given a set of independent random variables $X_1, X_2, \ldots, X_n$, and an unknown deterministic number $A$. Suppose that each random variable $X_i$ lies in the interval $[A - \epsilon, A + \epsilon]$ with probability at least $p$, where $p > 1/2$ is a constant. Then the median trick states that $\operatorname{median}(X_1, \ldots, X_n) \in [A - \epsilon, A + \epsilon]$ with probability at least $1 - e^{-2n(p - 1/2)^2}$. In other words, in order to ensure that the median lies in $[A - \epsilon, A + \epsilon]$ with probability at least $1 - \delta$, it suffices to take $n = O(\log(1/\delta))$ independent copies.

To see this, let $Y_i$ be the indicator variable for the event that $X_i \in [A - \epsilon, A + \epsilon]$, so that $\operatorname{E}[Y_i] \ge p > 1/2$. Then the event $\operatorname{median}(X_1, \ldots, X_n) \in [A - \epsilon, A + \epsilon]$ fails to occur only if at least half of the $X_i$ fall outside the interval, i.e. only if $\sum_{i=1}^{n} Y_i \le n/2$. By Hoeffding's inequality, this event occurs with probability at most $e^{-2n(p - 1/2)^2}$.
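As a concrete consequence, solving $e^{-2n(p - 1/2)^2} \le \delta$ for $n$ shows that $n \ge \ln(1/\delta) / (2(p - 1/2)^2)$ independent runs suffice. A small Python sketch of this calculation (the function name is illustrative):

```python
import math

def repetitions_needed(p, delta):
    # Smallest n with exp(-2 * n * (p - 0.5)**2) <= delta: enough
    # independent runs for the median to fall outside [A - eps, A + eps]
    # with probability at most delta, when each single run lands inside
    # the interval with probability at least p > 1/2.
    assert p > 0.5 and 0.0 < delta < 1.0
    return math.ceil(math.log(1.0 / delta) / (2.0 * (p - 0.5) ** 2))

# Per-run success probability 2/3 and target failure probability 1e-6:
print(repetitions_needed(2 / 3, 1e-6))  # -> 249
```

For instance, with per-run success probability $p = 2/3$ and target failure probability $\delta = 10^{-6}$, 249 runs are enough.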