Fuzzy extractor

Fuzzy extractors are a method that allows biometric data to be used as inputs to standard cryptographic techniques, to enhance computer security.

"Fuzzy", in this context, refers to the fact that the fixed values required for cryptography will be extracted from values close to but not identical to the original key, without compromising the security required.

Once such a fixed string has been extracted, it can be used, for example, for key agreement between a user and a server based only on a biometric input.
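
As an added illustration (not a protocol described in this article), the following Python sketch shows how a key extracted from a biometric reading might be used for a simple challenge-response key confirmation between a user and a server; the extraction step itself is assumed, and the keys are stubbed with fixed byte strings.

```python
import hmac, hashlib, os

def prove_key(key: bytes, challenge: bytes) -> bytes:
    """User's response: an HMAC over the server's challenge under the extracted key."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify_key(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Server's check, using the key it re-derived from the stored helper data."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Stand-ins for the key a fuzzy extractor would output at enrollment (user side)
# and reproduce later from a noisy biometric reading plus the public helper data.
user_key = hashlib.sha256(b"extracted-string").digest()
server_key = hashlib.sha256(b"extracted-string").digest()

challenge = os.urandom(16)
response = prove_key(user_key, challenge)
print(verify_key(server_key, challenge, response))  # True when both keys agree
```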

Fuzzy vault schemes, an order-invariant variant of the fuzzy commitment scheme, use a Reed–Solomon error correction code.

These techniques can also have broader applications for other types of noisy inputs, such as approximate data from human memory, images used as passwords, and keys from quantum channels.[2]

Fuzzy extractors also have applications in proving the impossibility of strong notions of privacy with regard to statistical databases.

This works as long as the entropy of the input is high: fuzzy extractors do not recover the original input but instead generate a nearly uniform string from it.

Thus fuzzy extractors output almost uniformly random sequences of bits, which is a prerequisite for use in cryptographic applications (for example, as secret keys).

Since the output bits are slightly non-uniform, there is a risk of decreased security; but the distance from a uniform distribution is no more than a small parameter ε.

As long as this distance is sufficiently small, the security will remain adequate.
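
To make the notion of "distance from a uniform distribution" concrete, the following Python sketch (an added illustration, with toy sample data) computes the statistical (total variation) distance between an empirical output distribution and the uniform distribution over bit strings of the same length.

```python
from collections import Counter
from itertools import product

def statistical_distance(samples, n_bits):
    """Total variation distance between the empirical distribution of
    `samples` (each an n_bits string like '010') and uniform over {0,1}^n_bits."""
    counts = Counter(samples)
    total = len(samples)
    uniform_p = 1 / (2 ** n_bits)
    dist = 0.0
    for bits in product("01", repeat=n_bits):
        s = "".join(bits)
        empirical_p = counts.get(s, 0) / total
        dist += abs(empirical_p - uniform_p)
    return dist / 2  # TV distance is half the L1 distance

# Example: a slightly biased 2-bit output
samples = ["00"] * 30 + ["01"] * 25 + ["10"] * 25 + ["11"] * 20
print(statistical_distance(samples, 2))  # 0.05
```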

The cited paper includes many generic combinatorial bounds on secure sketches and fuzzy extractors.[2]

Due to their error-tolerant properties, secure sketches can be treated, analyzed, and constructed like a general error-correcting code, or a linear [n, k, d] code when the code is linear.

The first step in constructing a secure sketch is determining the type of errors that are likely to occur and then choosing an appropriate distance measure.

When there is no risk of data being deleted, only of its being corrupted, the best distance measure to use for error correction is the Hamming distance.
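
For reference, the Hamming distance between two equal-length bit strings simply counts the positions at which they differ; a minimal Python sketch:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    if len(a) != len(b):
        raise ValueError("inputs must have the same length")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("1011101", "1001001"))  # 2
```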

There are two common constructions for correcting Hamming errors, depending on whether the code is linear or not.

This construction can achieve the best possible tradeoff between error tolerance and entropy loss for an appropriate choice of the underlying error-correcting code.
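
As an added toy illustration of one such construction, the code-offset construction, the following Python sketch uses a 3-fold binary repetition code as the underlying error-correcting code (the code choice and parameters are illustrative assumptions): the public sketch is the offset between the input and a random codeword, and recovery decodes the noisy reading shifted back by that offset.

```python
import secrets

def encode(message_bits):
    """Repetition code: repeat every message bit 3 times (corrects 1 error per block)."""
    return [b for bit in message_bits for b in (bit, bit, bit)]

def decode(code_bits):
    """Majority vote within each 3-bit block."""
    return [1 if sum(code_bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(code_bits), 3)]

def sketch(w):
    """Code-offset secure sketch: s = w XOR c for a uniformly random codeword c."""
    random_message = [secrets.randbelow(2) for _ in range(len(w) // 3)]
    c = encode(random_message)
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Recover the original w: decode (w' XOR s) to the nearest codeword,
    then add the offset back."""
    shifted = [wi ^ si for wi, si in zip(w_noisy, s)]
    c = encode(decode(shifted))
    return [ci ^ si for ci, si in zip(c, s)]

w = [1, 0, 1, 1, 0, 0, 1, 1, 0]   # original reading (length divisible by 3)
s = sketch(w)                     # public helper data
w_noisy = list(w)
w_noisy[4] ^= 1                   # at most one flipped bit per 3-bit block
print(recover(w_noisy, s) == w)   # True: the original reading is recovered
```

In this toy setting the helper data reveals only the offset, and the entropy lost is at most the redundancy of the code.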

This allows potential code words to exceed the Plotkin bound, which limits correction to n/4 errors.

More errors can be tolerated in this relaxed setting, since the model can account for all complex noise processes, meaning that Shannon's bound of nearly n/2 correctable errors can be reached; to do this, a random permutation is prepended to the secure sketch, which will reduce the entropy loss.
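
As an added, assumed illustration of the random-permutation step, the bit positions of the input can be shuffled with a seeded pseudorandom permutation before the secure sketch is applied; the seed is stored with the helper data, and the same permutation is applied to the noisy reading before recovery.

```python
import random

def permute(bits, seed):
    """Shuffle bit positions with a pseudorandom permutation derived from `seed`."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    return [bits[i] for i in order]

def unpermute(bits, seed):
    """Invert the permutation produced by `permute` with the same seed."""
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    out = [0] * len(bits)
    for dst, src in enumerate(order):
        out[src] = bits[dst]
    return out

w = [1, 0, 1, 1, 0, 0, 1, 1]
seed = 42  # stored alongside the secure sketch as public helper data
print(unpermute(permute(w, seed), seed) == w)  # True: the shuffle is invertible
```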

In the computationally bounded error model, errors may depend on both the input and the secure sketch, and an adversary is limited to polynomial-time algorithms for introducing errors.

In general, a secure system attempts to leak as little information as possible to an adversary.

For example, an adversary might notice that a certain pattern in the helper strings implies the ethnicity of the user.

The best possible solution is to make sure an adversary cannot learn anything useful from the helper string.

The leakage is the difference between the probabilities that two adversaries have of guessing some function of the input, when one knows the probabilistic map and the other does not.
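
In symbols (a reconstruction added here; the exact notation is an assumption rather than a quotation), the leakage of a probabilistic map Y with respect to a function f can be written as

    | Pr[A1(Y(W)) = f(W)] - Pr[A2() = f(W)] |

where the adversary A1 sees Y(W) (for example, the published helper string) and the adversary A2 does not.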

The extracted string is supposed to be kept secret; so, even if it is leaked (which should be very unlikely), the adversary can still learn nothing useful about the subject, as long as the string is nearly uncorrelated with the biometric input.

Since secure sketches imply fuzzy extractors, constructing a uniform secure sketch allows for the easy construction of a uniform fuzzy extractor.

Since randomness extractors output a string that appears to be from a uniform distribution, they hide all information about their input.
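
As an added sketch of a randomness extractor (a generic universal-hashing construction, not one attributed to this article), the following Python code derives output bits by taking inner products of the input with rows of a random binary matrix generated from a public seed; by the leftover hash lemma the output is statistically close to uniform when the input has enough min-entropy.

```python
import random

def extract(input_bits, seed, out_len):
    """Universal-hash extractor sketch: output = A . x over GF(2),
    where the binary matrix A is derived from the public seed."""
    rng = random.Random(seed)
    output = []
    for _ in range(out_len):
        row = [rng.randrange(2) for _ in range(len(input_bits))]
        # Inner product modulo 2 of the row with the input bits.
        output.append(sum(r & x for r, x in zip(row, input_bits)) % 2)
    return output

w = [1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1]  # noisy-source reading (toy example)
seed = 7                                   # public random seed, part of helper data
print(extract(w, seed, 4))                 # 4 nearly uniform output bits
```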

A fuzzy extractor reproduces the same output for any input within a bounded number of errors of the original, compared to a traditional hash function, which only produces a matching output when the input matches the original exactly.

Fuzzy perfectly one-way hash functions make an analogous claim.

Robust fuzzy extractors address the problem of an adversary modifying the public helper string: they allow the reproduce function to fail if a modified helper string is provided as input.

One method of constructing robust fuzzy extractors is to use hash functions.
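
A minimal sketch of that idea, assuming a toy code-offset sketch over a 3-fold repetition code (all parameters and helper fields here are illustrative assumptions, not a construction specified in this article): generation appends a hash binding the enrolled input to the sketch, and reproduction fails when the hash check does not pass.

```python
import hashlib, secrets

def encode(message_bits):
    """3-fold repetition code: corrects one flipped bit per 3-bit block."""
    return [b for bit in message_bits for b in (bit, bit, bit)]

def decode(code_bits):
    """Majority vote within each 3-bit block."""
    return [1 if sum(code_bits[i:i + 3]) >= 2 else 0
            for i in range(0, len(code_bits), 3)]

def tag(w, s):
    """Hash that binds the enrolled reading to the public sketch."""
    return hashlib.sha256(bytes(w) + b"|" + bytes(s)).hexdigest()

def generate(w):
    """Return (key, helper); helper = (code-offset sketch, binding hash)."""
    c = encode([secrets.randbelow(2) for _ in range(len(w) // 3)])
    s = [wi ^ ci for wi, ci in zip(w, c)]
    return hashlib.sha256(bytes(w)).digest(), (s, tag(w, s))

def reproduce(w_noisy, helper):
    """Recover the key, or return None if the helper string was modified."""
    s, t = helper
    c = encode(decode([wi ^ si for wi, si in zip(w_noisy, s)]))
    w = [ci ^ si for ci, si in zip(c, s)]
    if tag(w, s) != t:          # reject a tampered sketch or tag
        return None
    return hashlib.sha256(bytes(w)).digest()

w = [1, 0, 1, 1, 0, 0, 1, 1, 0]
key, helper = generate(w)
noisy = list(w)
noisy[2] ^= 1                                # one flipped bit is tolerated
print(reproduce(noisy, helper) == key)       # True: key reproduced
tampered = ([b ^ 1 for b in helper[0]], helper[1])
print(reproduce(noisy, tampered) is None)    # True: modified helper is rejected
```

A tampered helper string changes either the sketch or the tag, so the recomputed hash no longer matches and the reproduce step returns a failure instead of a key.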

Figure: red is the code-offset construction, blue is the syndrome construction, and green represents edit distance and other complex constructions.