In computer science, the count-distinct problem[1] (also known in applied mathematics as the cardinality estimation problem) is the problem of finding the number of distinct elements in a data stream with repeated elements.
This is a well-known problem with numerous applications.
The elements might represent IP addresses of packets passing through a router, unique visitors to a web site, elements in a large database, motifs in a DNA sequence, or elements of RFID/sensor networks.
An example of an instance for the cardinality estimation problem is the stream a, b, a, c, d, b, d; for this instance, the answer is n = |{a, b, c, d}| = 4.
The naive solution to the problem is to maintain a dictionary data structure D (such as a hash table or search tree), issue a membership query for each element of the stream, and insert the element into D if it is not already present, counting the insertions. As long as the number of distinct elements is not too big, D fits in main memory and an exact answer can be retrieved.
However, this approach does not scale for bounded storage, or if the computation performed for each element should be minimized.
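A minimal Python sketch of this naive approach (the function name and the set-based dictionary are illustrative choices):

```python
def count_distinct_exact(stream):
    """Exact count-distinct: store every distinct element in a set D.

    Memory grows linearly with the number of distinct elements,
    which is exactly what the streaming algorithms below avoid."""
    D = set()
    for x in stream:
        D.add(x)  # membership test and insert in O(1) expected time
    return len(D)

# For the stream a, b, a, c, d, b, d the exact answer is 4:
print(count_distinct_exact("abacdbd"))  # -> 4
```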
In such cases, several streaming algorithms have been proposed that use a fixed number of storage units.
To handle the bounded storage constraint, streaming algorithms use randomization to produce a non-exact estimate of the number of distinct elements, n.
State-of-the-art estimators hash every element e_j into a low-dimensional data sketch using a hash function h(e_j).
The different techniques can be classified according to the data sketches they store.
Min/max sketches[2][3] store only the minimum/maximum hashed values.
Examples of known min/max sketch estimators: Chassaing et al.[4] present a max sketch, which is the minimum-variance unbiased estimator for the problem, while the continuous max sketches estimator[5] is the maximum likelihood estimator.
The estimator of choice in practice is the HyperLogLog algorithm.[6]
The intuition behind such estimators is that each sketch carries information about the desired quantity. For example, when every element e_j is hashed into a uniform random variable h(e_j) ~ U(0,1), the expected minimum of h(e_1), h(e_2), ..., h(e_n) is 1/(n+1).
Since the hash function maps every appearance of e_j to the same value, the existence of duplicates does not affect the value of the extreme order statistics.
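As an illustration of this order-statistics intuition, below is a Python sketch of a k-minimum-values (KMV) min sketch, which keeps the k smallest distinct hash values and estimates n as (k-1)/u_(k), where u_(k) is the k-th smallest value. The SHA-1-based hash and the names are assumptions made for the example, not the construction of any particular cited estimator.

```python
import hashlib
import heapq

def h(x):
    """Hash x to a pseudo-uniform value in (0, 1)."""
    d = hashlib.sha1(repr(x).encode()).digest()
    return (int.from_bytes(d[:8], "big") + 1) / 2.0**64

def kmv_estimate(stream, k=256):
    """k-minimum-values sketch: track the k smallest distinct hash values.

    Duplicates of an element all hash to the same value, so repeats
    never change the extreme order statistics.  With n >= k distinct
    elements, the k-th smallest hash u_(k) satisfies E[u_(k)] = k/(n+1),
    and (k-1)/u_(k) is the standard unbiased KMV estimate of n."""
    heap = []        # max-heap (values negated) holding the k smallest hashes
    members = set()  # hash values currently in the heap
    for x in stream:
        u = h(x)
        if u in members:
            continue                   # duplicate: already accounted for
        if len(heap) < k:
            heapq.heappush(heap, -u)
            members.add(u)
        elif u < -heap[0]:             # smaller than the current k-th minimum
            evicted = -heapq.heappushpop(heap, -u)
            members.discard(evicted)
            members.add(u)
    if len(heap) < k:                  # fewer than k distinct hashes: exact
        return len(heap)
    return (k - 1) / (-heap[0])        # -heap[0] is u_(k)
```

The sketch uses O(k) memory regardless of the stream length; larger k trades memory for lower variance.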
The first paper on count-distinct estimation[7] describes the Flajolet–Martin algorithm, a bit pattern sketch.
In this case, the elements are hashed into a bit vector and the sketch holds the logical OR of all hashed values.
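A minimal Python sketch of this bit-pattern idea, assuming a SHA-1-based hash and the usual Flajolet–Martin correction constant φ ≈ 0.77351:

```python
import hashlib

PHI = 0.77351  # Flajolet–Martin correction constant

def fm_estimate(stream):
    """Flajolet–Martin bit-pattern sketch.

    Each element is hashed to a 64-bit value; rho is the position of
    its lowest set bit, and the sketch is the bitwise OR of the
    one-hot patterns 1 << rho.  Duplicates hash identically, so they
    set the same bit and leave the sketch unchanged."""
    sketch = 0
    for x in stream:
        hv = int.from_bytes(hashlib.sha1(repr(x).encode()).digest()[:8], "big")
        rho = (hv & -hv).bit_length() - 1 if hv else 64  # lowest set bit
        sketch |= 1 << rho
    R = 0  # position of the lowest zero bit of the sketch
    while sketch & (1 << R):
        R += 1
    return 2**R / PHI
```

In practice the variance is reduced by averaging over several independent hash functions or by stochastic averaging; HyperLogLog refines the same idea with an array of registers and a bias-corrected harmonic mean.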
The first asymptotically space- and time-optimal algorithm for this problem was given by Daniel M. Kane, Jelani Nelson, and David P. Woodruff.[8]
See Cosma et al.[2] for a theoretical overview of count-distinct estimation algorithms, and Metwally[10] for a practical overview with comparative simulation results.
Compared to other approximation algorithms for the count-distinct problem, the CVM algorithm[11] (named by Donald Knuth after the initials of Sourav Chakraborty, N. V. Vinodchandran, and Kuldeep S. Meel) uses sampling instead of hashing.
The CVM algorithm provides an unbiased estimator for the number of distinct elements in a stream,[12] in addition to the standard (ε, δ) guarantees.
Below is the CVM algorithm, including the slight modification by Donald Knuth.[12] In the original version, a full buffer is culled once, by discarding each buffered element with probability 1/2, and the algorithm fails if the buffer remains full; Knuth's modification adds a while loop that repeats the culling step until the buffer B is actually reduced.
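The following Python rendering is a sketch of the modified algorithm; the function name, the dict-based buffer, and the parameter s (the maximum buffer size) are implementation choices made for this example.

```python
import random

def cvm_estimate(stream, s=1000):
    """CVM estimator with Knuth's while-loop modification.

    B maps each sampled element to the uniform value u it was kept
    with.  The invariant is that every distinct element seen so far
    is in B independently with probability p, so |B|/p is an
    unbiased estimate of the number of distinct elements."""
    assert s >= 1
    p = 1.0
    B = {}                          # element -> u, with u < p
    for a in stream:
        B.pop(a, None)              # re-sample a on every appearance
        u = random.random()
        if u < p:
            B[a] = u
        while len(B) >= s:          # Knuth's modification: repeat the
            p /= 2                  # culling until B is actually reduced
            B = {x: v for x, v in B.items() if v < p}
    return len(B) / p

# Example: a stream of 10^6 draws from 10^4 possible values.
stream = (random.randrange(10_000) for _ in range(1_000_000))
print(cvm_estimate(stream, s=1000))  # typically close to 10 000
```

Each pass of the while loop halves p and keeps each buffered pair independently with probability 1/2, so the loop terminates with probability 1 and the sampling invariant is preserved.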
In the weighted version of the problem, each element is associated with a weight, and the goal is to estimate the total sum of the weights of the distinct elements. Formally, given a stream in which each distinct element e_j carries a weight w_j, the goal is to compute w = w_1 + w_2 + ... + w_n. An example of an instance for the weighted problem is the stream a(3), b(4), a(3), c(2), d(3), b(4), d(3), with each element's weight in parentheses; its answer is w = 3 + 4 + 2 + 3 = 12. For example, the elements might be IP packets received by a server, where each packet belongs to one of n flows and w_j is the load imposed by flow e_j on the server; then w represents the total load imposed on the server by all the flows to which packets in the stream belong.
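For concreteness, a small Python snippet that computes the exact weighted answer for the instance above; it only illustrates the target quantity, not a streaming estimator.

```python
def weighted_distinct_sum(stream):
    """Exact sum of weights over distinct elements.

    Each stream item is a pair (element, weight); repeated
    appearances of an element carry the same weight, so the
    dictionary simply keeps one weight per distinct element."""
    weights = {}
    for element, weight in stream:
        weights[element] = weight
    return sum(weights.values())

# The instance a(3), b(4), a(3), c(2), d(3), b(4), d(3):
stream = [("a", 3), ("b", 4), ("a", 3), ("c", 2),
          ("d", 3), ("b", 4), ("d", 3)]
print(weighted_distinct_sum(stream))  # -> 12
```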
Any extreme order statistics estimator (min/max sketch) for the unweighted problem can be generalized to an estimator for the weighted problem.[13]
For example, the weighted estimator proposed by Cohen et al.[5] can be obtained when the continuous max sketches estimator is extended to solve the weighted problem.
In particular, the HyperLogLog algorithm[6] can be extended to solve the weighted problem.
Among all known algorithms for the weighted problem, the extended HyperLogLog algorithm offers the best performance in terms of statistical accuracy and memory usage.