Flajolet–Martin algorithm

The Flajolet–Martin algorithm is an algorithm for approximating the number of distinct elements in a stream with a single pass and space consumption logarithmic in the maximal number of possible distinct elements in the stream (the count-distinct problem).

The algorithm was introduced by Philippe Flajolet and G. Nigel Martin in their 1984 article "Probabilistic Counting Algorithms for Data Base Applications".[1] It was later refined in "LogLog counting of large cardinalities" by Marianne Durand and Philippe Flajolet[2] and in "HyperLogLog: The analysis of a near-optimal cardinality estimation algorithm" by Philippe Flajolet et al.[3] In their 2010 article "An optimal algorithm for the distinct elements problem",[4] Daniel M. Kane, Jelani Nelson and David P. Woodruff give an improved algorithm, which uses nearly optimal space and has optimal O(1) update and reporting times.

Assume that we are given a hash function $\mathrm{hash}(x)$ that maps input to integers in the range $[0; 2^L - 1]$, and where the outputs are sufficiently uniformly distributed.

Note that the set of integers from $0$ to $2^L - 1$ corresponds to the set of binary strings of length $L$. For any non-negative integer $y$, define $\mathrm{bit}(y, k)$ to be the $k$-th bit in the binary representation of $y$, so that

$$y = \sum_{k \ge 0} \mathrm{bit}(y, k) \, 2^k.$$

We then define a function $\rho(y)$ that outputs the position of the least-significant set bit in the binary representation of $y$, and $L$ if no such set bit can be found as all bits are zero:

$$\rho(y) = \begin{cases} \min\{k \ge 0 : \mathrm{bit}(y, k) \ne 0\} & \text{if } y > 0, \\ L & \text{if } y = 0. \end{cases}$$

Note that with the above definition we are using 0-indexing for the positions, starting from the least significant bit.

For example, $\rho(13) = \rho(1101_2) = 0$, since the least significant bit is a 1 (0th position), and $\rho(8) = \rho(1000_2) = 3$, since the least significant set bit is at the 3rd position.
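
For illustration, here is a minimal Python sketch of $\rho$ following the definition above (the function name and the loop-based bit scan are incidental choices, not from the original article):

```python
def rho(y, L):
    """0-indexed position of the least-significant set bit of y,
    or L when y == 0 (no set bit exists among the L bits)."""
    if y == 0:
        return L
    k = 0
    while (y >> k) & 1 == 0:  # bit(y, k) is 0, keep scanning upward
        k += 1
    return k

assert rho(0b1101, 8) == 0  # 13 = 1101_2: bit at position 0 is set
assert rho(0b1000, 8) == 3  # 8 = 1000_2: least-significant set bit at position 3
```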

At this point, note that under the assumption that the output of our hash function is uniformly distributed, the probability of observing a hash output ending with $2^k$ (a one followed by $k$ zeroes) is $2^{-(k+1)}$, since this corresponds to flipping $k$ heads and then a tail with a fair coin.
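
Spelling this out: for a uniformly distributed $y$, the bits $\mathrm{bit}(y,0), \mathrm{bit}(y,1), \ldots$ behave like independent fair coin flips, so

$$\Pr[\rho(y) = k] = \underbrace{\Pr[\mathrm{bit}(y,0)=0] \cdots \Pr[\mathrm{bit}(y,k-1)=0]}_{=\,2^{-k}} \cdot \Pr[\mathrm{bit}(y,k)=1] = 2^{-(k+1)}.$$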

Now the Flajolet–Martin algorithm for estimating the cardinality of a multiset $M$ is as follows:

1. Initialize a bit-vector $\mathrm{BITMAP}$ to be of length $L$ and contain all 0s.
2. For each element $x$ in $M$: calculate the index $i = \rho(\mathrm{hash}(x))$ and set $\mathrm{BITMAP}[i] = 1$.
3. Let $R$ denote the smallest index $i$ such that $\mathrm{BITMAP}[i] = 0$.
4. Estimate the cardinality of $M$ as $2^R/\phi$, where $\phi \approx 0.77351$.

The idea is that if $n$ is the number of distinct elements in the multiset $M$, then $\mathrm{BITMAP}[0]$ is accessed approximately $n/2$ times, $\mathrm{BITMAP}[1]$ approximately $n/4$ times, and so on; consequently $\mathrm{BITMAP}[i]$ is almost certainly 1 for $i \ll \log_2 n$ and almost certainly 0 for $i \gg \log_2 n$, so $R$ tracks $\log_2 n$. The correction factor $\phi \approx 0.77351$ is derived by calculations that can be found in the original article.
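
Putting the pieces together, here is a minimal single-hash sketch in Python. Masking Python's built-in hash to $L$ bits is a stand-in assumption for illustration only; the original article does not prescribe a particular hash function, and a well-mixed hash is required in practice.

```python
PHI = 0.77351  # correction factor from the original article

def rho(y, L):
    """0-indexed position of the least-significant set bit, or L if y == 0."""
    return (y & -y).bit_length() - 1 if y else L

def flajolet_martin(stream, L=32):
    """One-pass Flajolet-Martin estimate of the number of distinct elements."""
    bitmap = [0] * (L + 1)  # extra slot so the (rare) all-zero hash output fits
    for x in stream:
        y = hash(x) & ((1 << L) - 1)  # stand-in L-bit hash (an assumption)
        bitmap[rho(y, L)] = 1
    # R = smallest index whose bit is still 0
    R = next((i for i, b in enumerate(bitmap) if b == 0), L + 1)
    return (2 ** R) / PHI
```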

A problem with the Flajolet–Martin algorithm in the above form is that the results vary significantly.

A common solution has been to run the algorithm multiple times with $k$ different hash functions and combine the results from the different runs. One idea is to take the mean of the $k$ results from the different hash functions, obtaining a single estimate of the cardinality.

The problem with this is that averaging is very susceptible to outliers (which are likely here).

A different idea is to use the median, which is less prone to being influenced by outliers.

The problem with this is that the results can only take the form $2^R/\phi$, where $R$ is an integer.
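
Concretely, with $\phi \approx 0.77351$ the only achievable estimates are

$$2^0/\phi \approx 1.29, \quad 2^1/\phi \approx 2.59, \quad 2^2/\phi \approx 5.17, \quad 2^3/\phi \approx 10.34, \; \ldots$$

so the median cannot land between consecutive powers of two, no matter how many runs are combined.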

A common solution is to combine both the mean and the median: create $k \cdot l$ hash functions and split them into $k$ distinct groups (each of size $l$). Within each group use the mean for aggregating together the $l$ results, and finally take the median of the $k$ group estimates as the final estimate.[5]
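
A sketch of this mean-then-median combination, assuming an illustrative hash family built by salting SHA-256 (the family construction and the names make_hash and fm_mean_median are assumptions, not from the article):

```python
import hashlib
from statistics import median

PHI = 0.77351

def rho(y, L):
    """0-indexed position of the least-significant set bit, or L if y == 0."""
    return (y & -y).bit_length() - 1 if y else L

def make_hash(seed, L):
    """Illustrative salted hash family; any family of independent,
    well-mixed L-bit hash functions would serve."""
    def h(x):
        d = hashlib.sha256(f"{seed}:{x}".encode()).digest()
        return int.from_bytes(d[:8], "big") & ((1 << L) - 1)
    return h

def fm_mean_median(stream, k=5, l=4, L=32):
    """Run k*l FM estimators in one pass: take the mean within each of
    the k groups of size l, then the median of the k group means."""
    hashes = [make_hash(seed, L) for seed in range(k * l)]
    bitmaps = [[0] * (L + 1) for _ in hashes]
    for x in stream:
        for h, bm in zip(hashes, bitmaps):
            bm[rho(h(x), L)] = 1
    ests = [(2 ** next((i for i, b in enumerate(bm) if b == 0), L + 1)) / PHI
            for bm in bitmaps]
    means = [sum(ests[g * l:(g + 1) * l]) / l for g in range(k)]
    return median(means)
```

For instance, fm_mean_median(str(i % 1000) for i in range(10000)) should return an estimate in the rough vicinity of 1000, with far less run-to-run variation than a single estimator.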

The 2007 HyperLogLog algorithm splits the multiset into subsets and estimates their cardinalities, then uses the harmonic mean to combine them into an estimate for the original cardinality.