In computer science, a family of hash functions is said to be k-independent, k-wise independent or k-universal[1] if selecting a function at random from the family guarantees that the hash codes of any designated k keys are independent random variables (see precise mathematical definitions below).
The trade-offs between the degree of independence and the efficiency of evaluating the hash function are well studied, and many k-independent families have been proposed.
The goal of hashing is usually to map keys from some large domain (universe) U into a smaller range, such as m bins (labelled [m] = {0, ..., m − 1}), in such a way that the hash codes of the keys "behave randomly". For instance, if the hash code of each key were an independent random choice in [m], the number of keys per bin could be analyzed using the Chernoff bound.
The first definition along these lines was universal hashing, which guarantees a low collision probability for any two designated keys.
k-independent hashing, introduced by Wegman and Carter in 1981,[2] strengthens the guarantees of random behavior to families of k designated keys, and adds a guarantee on the uniform distribution of hash codes.
The strictest definition, introduced by Wegman and Carter[2] under the name "strongly universal_k hash family", is the following. A family of hash functions H = {h : U → [m]} is k-independent if for any k distinct keys (x_1, ..., x_k) ∈ U^k and any k hash codes (not necessarily distinct) (y_1, ..., y_k) ∈ [m]^k, we have:

    Pr_{h ∈ H}[h(x_1) = y_1 ∧ ⋯ ∧ h(x_k) = y_k] = m^(−k).

This definition is equivalent to the following two conditions:
1. For any fixed x ∈ U, as h is drawn at random from H, the hash code h(x) is uniformly distributed in [m].
2. For any fixed, distinct keys x_1, ..., x_k ∈ U, as h is drawn at random from H, the hash codes h(x_1), ..., h(x_k) are independent random variables.

Often it is inconvenient to achieve the perfect joint probability of m^(−k) due to rounding issues. One relaxation is to require only that the joint probability be bounded by μ/m^k for some constant μ; however, even for μ close to 1 the hash codes are then no longer independent random variables, which is often a problem. Therefore, a more common alternative to dealing with rounding issues is to prove that the hash family is close in statistical distance to a k-independent family, which allows black-box use of its independence properties.
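For illustration, the definition can be checked by brute force on small families. The following Python sketch (illustrative names throughout) verifies the strongly universal condition; the family of all functions from U to [m] trivially satisfies it:

    from itertools import permutations, product

    def is_k_independent(family, universe, m, k):
        """Brute-force check of the definition above: for every k-tuple of
        distinct keys and every k-tuple of hash codes, the fraction of
        functions h in the family with h(x_i) = y_i must equal m**(-k)."""
        for xs in permutations(universe, k):
            for ys in product(range(m), repeat=k):
                matches = sum(1 for h in family
                              if all(h[x] == y for x, y in zip(xs, ys)))
                if matches * m**k != len(family):
                    return False
        return True

    # The family of ALL functions from U to [m] is k-independent for any k.
    U, m = [0, 1, 2], 2
    family = [dict(zip(U, vals)) for vals in product(range(m), repeat=len(U))]
    print(is_k_independent(family, U, m, k=2))  # True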
The original technique for constructing k-independent hash functions, given by Carter and Wegman, was to select a large prime number p, choose k random numbers modulo p, and use these numbers as the coefficients of a polynomial of degree k − 1 whose values modulo p are used as the value of the hash function.
All polynomials of the given degree modulo p are equally likely, and any polynomial is uniquely determined by any k-tuple of argument-value pairs with distinct arguments, from which it follows that any k-tuple of distinct arguments is equally likely to be mapped to any k-tuple of hash values.
In general, the polynomial can be evaluated in any finite field. Besides the fields modulo a prime, a popular choice is a field of size 2^64, which supports fast finite field arithmetic (via carry-less multiplication) on modern computers. This was the approach taken by Daniel Lemire and Owen Kaser for CLHash.[4]
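A minimal sketch of the prime-field version of this construction in Python (the function name and parameters are illustrative; note that the final reduction from [p] to m bins is only approximately uniform, one instance of the rounding issues discussed above):

    import random

    def make_poly_hash(k, m, p=2**61 - 1):
        """Sample one function from the Carter-Wegman polynomial family:
        k random coefficients modulo a prime p define a degree-(k-1)
        polynomial whose values mod p are k-independent over keys in [0, p)."""
        coeffs = [random.randrange(p) for _ in range(k)]
        def h(x):
            acc = 0
            for c in coeffs:          # Horner's rule: O(k) multiplications
                acc = (acc * x + c) % p
            return acc % m            # fold into m bins (approximately uniform)
        return h

    h = make_poly_hash(k=4, m=1024)
    print(h(12345), h(12345), h(67890))  # same key always hashes the same way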
Tabulation hashing is a technique for mapping keys to hash values by partitioning each key into bytes, using each byte as the index into a table of random numbers (with a different table for each byte position), and combining the results of these table lookups by a bitwise exclusive or operation.
Thus, it requires more randomness in its initialization than the polynomial method, but avoids possibly-slow multiplication operations.[5]
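A minimal sketch of simple tabulation hashing for 32-bit keys (illustrative names and sizes; simple tabulation is known to be 3-independent but not 4-independent):

    import random

    TABLE_BITS = 8     # split keys into bytes
    NUM_CHUNKS = 4     # a 32-bit key has four byte positions

    # One table of random hash codes per byte position.
    tables = [[random.getrandbits(32) for _ in range(1 << TABLE_BITS)]
              for _ in range(NUM_CHUNKS)]

    def tabulation_hash(key):
        """XOR together one random table entry per byte of the key."""
        h = 0
        for i in range(NUM_CHUNKS):
            byte = (key >> (8 * i)) & 0xFF
            h ^= tables[i][byte]
        return h

    print(tabulation_hash(0xDEADBEEF))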
Variations of tabulation hashing can achieve higher degrees of independence by performing table lookups based on overlapping combinations of bits from the input key, or by applying simple tabulation hashing iteratively.[6][7]
The notion of k-independence can be used to differentiate between different collision resolution strategies in hash tables, according to the level of independence required to guarantee constant expected time per operation.
For instance, hash chaining takes constant expected time even with a 2-independent family of hash functions, because the expected time to perform a search for a given key is bounded by the expected number of collisions that key is involved in.
Because the terms of this sum only involve probabilistic events involving two keys, 2-independence is sufficient to ensure that this sum has the same value that it would for a truly random hash function.
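Concretely, if n keys are stored in m bins and h is drawn from a 2-independent family, the expected number of keys colliding with a fixed key x is

    Σ_{y ≠ x} Pr[h(y) = h(x)] = (n − 1)/m,

since 2-independence makes each pairwise collision probability exactly 1/m, the same value obtained for a truly random hash function.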
Double hashing is another method of hashing that requires a low degree of independence. It is a form of open addressing that uses two hash functions: one to determine the start of a probe sequence, and the other to determine the step size between positions in the probe sequence. As long as both of these are 2-independent, this method gives constant expected time per operation.[8]
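A sketch of the resulting probe sequence (illustrative names; it assumes the table size m is chosen so that every nonzero step is coprime to m, e.g. m prime):

    def double_hash_probes(key, m, h1, h2):
        """Yield the open-addressing probe sequence for double hashing:
        h1 picks the starting bin, h2 picks the (nonzero) step size."""
        start = h1(key) % m
        step = 1 + (h2(key) % (m - 1))   # force a nonzero step
        for i in range(m):
            yield (start + i * step) % m

    # e.g. with two independently sampled 2-independent functions h1, h2,
    # such as the polynomial construction sketched earlier:
    #   for bin in double_hash_probes(key, m=101, h1=h1, h2=h2): ...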
On the other hand, linear probing, a simpler form of open addressing where the step size is always one, can be guaranteed to work in constant expected time per operation with a 5-independent hash function,[9] and there exist 4-independent hash functions for which it takes logarithmic time per operation.
For cuckoo hashing, the exact degree of independence required is not known; logarithmic independence is known to suffice, and tabulation hashing has also been shown to suffice despite its limited independence. A third approach from 2014[13] is to slightly modify the cuckoo hashtable with a so-called stash, which makes it possible to use nothing more than 2-independent hash functions.
In 2010, Kane, Nelson, and Woodruff improved the Flajolet–Martin algorithm for the Distinct Elements Problem.
The Count sketch algorithm for dimensionality reduction requires two hash functions, one 2-independent and one 4-independent.
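A single-row sketch of that combination, reusing the polynomial construction from above (class and function names are illustrative):

    import random

    P = 2**61 - 1  # a Mersenne prime

    def poly_hash(k, m):
        """k-independent hash into [m] (Carter-Wegman polynomial, as sketched
        above; uniform only up to the rounding issues discussed earlier)."""
        coeffs = [random.randrange(P) for _ in range(k)]
        def h(x):
            acc = 0
            for c in coeffs:
                acc = (acc * x + c) % P
            return acc % m
        return h

    class CountSketchRow:
        def __init__(self, width):
            self.counters = [0] * width
            self.bucket = poly_hash(2, width)      # 2-independent bucket hash
            bit = poly_hash(4, 2)                  # 4-independent sign hash
            self.sign = lambda x: 2 * bit(x) - 1   # map {0, 1} to {-1, +1}

        def update(self, x, count=1):
            self.counters[self.bucket(x)] += self.sign(x) * count

        def estimate(self, x):
            # Unbiased estimate of x's frequency; the higher independence
            # of the sign hash is what the variance analysis relies on.
            return self.sign(x) * self.counters[self.bucket(x)]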