A major goal in cryptography is to create cryptographic primitives with provable security.
In some cases, cryptographic protocols are found to have information-theoretic security; the one-time pad is a common example. However, information-theoretic security cannot always be achieved; in such cases, cryptographers fall back on computational security, i.e., the assumption that breaking the scheme would require an infeasible amount of computation.
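For instance, here is a minimal sketch of the one-time pad, whose security needs no hardness assumption provided the key is uniformly random, as long as the message, and never reused (the function names are illustrative, not from any particular library):

```python
import secrets

def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
    """Encrypt with a one-time pad: the key must be uniformly random,
    as long as the message, and never reused."""
    key = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR is its own inverse, so decryption is the same operation.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = otp_encrypt(b"attack at dawn")
assert otp_decrypt(key, ct) == b"attack at dawn"
```

Because the key is uniform and independent of the message, the ciphertext is statistically independent of the message, so security holds against even computationally unbounded adversaries.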
Computational hardness assumptions are also useful for guiding algorithm designers: a simple algorithm is unlikely to refute a well-studied computational hardness assumption such as P ≠ NP.
Computer scientists have different ways of assessing which hardness assumptions are more reliable.
All else being equal, weaker assumptions are preferable, since they are less likely to be false; thus, when devising cryptographic protocols, one hopes to be able to prove security using the weakest possible assumptions.[4]
Roughly speaking, a computational hardness assumption is said to be falsifiable if it can be formulated in terms of a challenge: an interactive protocol between an adversary and an efficient verifier, where an efficient adversary can convince the verifier to accept if and only if the assumption is false.
It is a major open problem to find an algorithm for integer factorization that runs in time polynomial in the size of the representation (log n for an integer n).
The security of many cryptographic protocols relies on the assumption that integer factorization is hard (i.e., that it cannot be solved in polynomial time).
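To see why "polynomial in the size of the representation" is the relevant yardstick, note that naive trial division takes roughly √n steps, which is exponential in the bit length log n. A sketch:

```python
def trial_division(n: int) -> int:
    """Return the smallest nontrivial factor of a composite n.
    Runs in about sqrt(n) steps, which is exponential in the
    input size log2(n), so it is not considered efficient."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    raise ValueError("n is prime")

# Feasible for small n, but a 2048-bit RSA modulus would require
# on the order of 2**1024 iterations.
print(trial_division(15))  # 3
```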
Many more cryptosystems rely on stronger assumptions such as RSA, residuosity problems, and phi-hiding.
Some cryptosystems that rely on the hardness of residuosity problems include the Goldwasser–Micali and Paillier cryptosystems. Relatedly, for a composite number m it is not known how to efficiently compute Euler's totient function φ(m); the phi-hiding assumption postulates that computing φ(m) is hard, and moreover that even computing any prime factors of φ(m) is hard.
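As an illustration of the residuosity setting, the following sketch (assuming only standard number theory; the helper name is illustrative) decides quadratic residuosity modulo n = p·q using the secret factorization as a trapdoor. No efficient method is known given n alone, which is the gap these cryptosystems exploit:

```python
def is_qr_with_trapdoor(x: int, p: int, q: int) -> bool:
    """Decide whether x (coprime to p*q) is a quadratic residue
    mod n = p*q, using the secret factorization as a trapdoor:
    x is a square mod n iff it is a square mod both odd primes."""
    # Euler's criterion: x is a QR mod an odd prime p
    # iff x^((p-1)/2) = 1 (mod p).
    qr_mod_p = pow(x, (p - 1) // 2, p) == 1
    qr_mod_q = pow(x, (q - 1) // 2, q) == 1
    return qr_mod_p and qr_mod_q

# With p = 7, q = 11 (n = 77): 4 = 2^2 is a residue, 5 is not.
print(is_qr_with_trapdoor(4, 7, 11))   # True
print(is_qr_with_trapdoor(5, 7, 11))   # False
```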
The discrete log problem is not known to be comparable to integer factorization, but their computational complexities are closely related.
Most cryptographic protocols related to the discrete log problem actually rely on the stronger Diffie–Hellman assumption: given group elements g, g^a, and g^b, where g is a generator and a and b are random integers, it is hard to compute g^(ab).
Examples of protocols that use this assumption include the original Diffie–Hellman key exchange, as well as the ElGamal encryption (which relies on the yet stronger Decisional Diffie–Hellman (DDH) variant).
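A toy version of the Diffie–Hellman exchange in Python, using a small Mersenne prime for readability (real deployments use groups of 2048 bits or more; all parameter choices here are purely illustrative):

```python
import secrets

# Public parameters: a prime modulus and a generator.
# 2**61 - 1 is a Mersenne prime; real groups are far larger.
p = 2**61 - 1
g = 3

# Each party picks a secret exponent and publishes g^secret mod p.
a = secrets.randbelow(p - 2) + 1          # Alice's secret
b = secrets.randbelow(p - 2) + 1          # Bob's secret
A = pow(g, a, p)                          # Alice sends A to Bob
B = pow(g, b, p)                          # Bob sends B to Alice

# Both sides arrive at g^(ab); an eavesdropper sees only g, p, A, B,
# and computing g^(ab) from these is exactly the Diffie-Hellman problem.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```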
Some cryptosystems rely on hardness assumptions for multilinear maps; however, many candidate constructions have been proposed in recent years, many of them have also been broken, and currently there is no consensus about a safe candidate.[8]
The most fundamental computational problem on lattices is the shortest vector problem (SVP): given a lattice (described by some basis), find the shortest nonzero lattice vector.
Most cryptosystems require stronger assumptions on variants of SVP, such as the shortest independent vectors problem (SIVP), GapSVP,[10] or Unique-SVP.[11]
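In fixed low dimension SVP is actually easy; the difficulty grows with the dimension, where exhaustive search becomes infeasible. A minimal two-dimensional sketch (the coefficient bound is a heuristic chosen for this example; in general it must be derived from the basis):

```python
from itertools import product

def shortest_vector_2d(b1, b2, bound=15):
    """Brute-force a shortest nonzero vector of the 2D lattice
    spanned by b1 and b2, trying coefficients in [-bound, bound]."""
    best, best_norm2 = None, float("inf")
    for c1, c2 in product(range(-bound, bound + 1), repeat=2):
        if (c1, c2) == (0, 0):
            continue
        v = (c1 * b1[0] + c2 * b2[0], c1 * b1[1] + c2 * b2[1])
        n2 = v[0] ** 2 + v[1] ** 2
        if n2 < best_norm2:
            best, best_norm2 = v, n2
    return best

# A deliberately "skewed" basis of a lattice whose shortest vector
# is (1, 1): much shorter than either basis vector.
print(shortest_vector_2d((11, -4), (24, -9)))  # (1, 1) or (-1, -1)
```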
The most useful lattice hardness assumption in cryptography is for the learning with errors (LWE) problem: given samples of the form (a, ⟨a, s⟩ + e), where a is drawn uniformly at random, s is a fixed secret vector, and e is drawn from a narrow error distribution, recover s.
The errors are believed to make the problem intractable for appropriate parameters; without them, the samples are exact linear equations and s can be recovered by Gaussian elimination. In particular, there are known worst-case to average-case reductions from variants of SVP.
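A sketch of LWE sample generation under the formulation above; the modulus, dimension, and error distribution below are toy values chosen only for illustration:

```python
import random

q, n = 97, 4                     # toy modulus and dimension
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample(with_error=True):
    """One LWE sample (a, <a, s> + e mod q) with a small error e."""
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1]) if with_error else 0   # toy error distribution
    b = (sum(ai * si for ai, si in zip(a, secret)) + e) % q
    return a, b

# With e = 0, n samples in general position determine the secret by
# Gaussian elimination mod q; the small random errors break that attack.
print(lwe_sample())
```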
Some cryptosystems that rely on the hardness of lattice problems include NTRU and most candidate constructions of fully homomorphic encryption.
As well as their cryptographic applications, hardness assumptions are used in computational complexity theory to provide evidence for mathematical statements that are difficult to prove unconditionally.
The best-known assumption of this type is the assumption that P ≠ NP,[14] but others include the exponential time hypothesis,[15] the planted clique conjecture, and the unique games conjecture.[16]
Many worst-case computational problems are known to be hard or even complete for some complexity class C, in particular NP-hard. This means that the problem is at least as hard as any problem in the class C: if a problem is C-hard (with respect to polynomial-time reductions), then it cannot be solved by a polynomial-time algorithm unless the computational hardness assumption P ≠ C is false.
For example, the exponential time hypothesis (ETH) conjectures that 3-SAT cannot be solved in time 2^o(n), where n is the number of variables.[17] An even stronger assumption, known as the strong exponential time hypothesis (SETH), conjectures that for every ε > 0 there is a k such that k-SAT cannot be solved in time 2^((1−ε)n).
ETH, SETH, and related computational hardness assumptions allow for deducing fine-grained complexity results, e.g. results that distinguish polynomial time and quasi-polynomial time,[1] or even finer distinctions such as n^1.99 versus n^2.[19]
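For concreteness, here is the exhaustive-search baseline that SETH asserts cannot be substantially improved: deciding satisfiability by trying all 2^n assignments. (A sketch; the formula encoding is illustrative.)

```python
from itertools import product

def brute_force_sat(n, clauses):
    """Decide satisfiability of a CNF formula over variables 1..n.
    A clause is a tuple of nonzero literals: literal v means
    variable v is true, -v means it is false. Runs in roughly
    2^n * |clauses| time; SETH conjectures that no algorithm
    achieves 2^((1-eps)n) for k-SAT with arbitrary fixed k."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(brute_force_sat(3, [(1, 2), (-1, 3), (-2, -3)]))  # True
```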
Some computational problems are assumed to be hard on average over a particular distribution of instances.
For example, in the planted clique problem the input is a random graph in which a large clique has been hidden, and the assumption is that no polynomial-time algorithm can find it (or even distinguish the graph from a uniformly random one).[20] Another important example is Feige's hypothesis, a computational hardness assumption about random instances of 3-SAT (sampled to maintain a specific ratio of clauses to variables).[22]
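The instance distribution behind Feige's hypothesis can be sketched as follows; the parameter values passed below are placeholders, since the hypothesis concerns a large constant clause-to-variable ratio:

```python
import random

def random_3sat(n, ratio):
    """Sample a random 3-SAT instance with n variables and
    round(ratio * n) clauses, each on 3 distinct random variables
    with random polarities; 'ratio' is the clause/variable ratio."""
    m = round(ratio * n)
    clauses = []
    for _ in range(m):
        vars3 = random.sample(range(1, n + 1), 3)
        clauses.append(tuple(v if random.random() < 0.5 else -v
                             for v in vars3))
    return clauses

print(random_3sat(n=10, ratio=5.0))  # placeholder parameters
```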
The planted clique hardness assumption has also been used to distinguish between polynomial and quasi-polynomial worst-case time complexity of other problems,[23] similarly to the exponential time hypothesis.
The unique games conjecture (UGC) postulates that, for a certain kind of constraint satisfaction problem known as a unique game, it is NP-hard to distinguish instances that are almost satisfiable from those that are almost completely unsatisfiable. In particular, assuming UGC, there is a semidefinite programming algorithm that achieves optimal approximation guarantees for many important problems.
The small set expansion (SSE) problem can be seen as a special case of a unique game; hence the small set expansion hypothesis, which postulates that SSE is hard to approximate, is a stronger (but closely related) assumption than the unique games conjecture.
The 3SUM conjecture asserts that, given a set of n numbers, determining whether any three of them sum to zero requires near-quadratic time. This conjecture is useful for proving near-quadratic lower bounds for several problems, mostly from computational geometry.
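As an illustration of the quadratic barrier, the classic sorting-plus-two-pointers algorithm solves 3SUM in O(n^2) time, and the conjecture asserts that no truly subquadratic (n^(2−ε)) algorithm exists (a minimal sketch):

```python
def has_3sum(nums):
    """Return True if some three elements sum to zero.
    Sorting plus a two-pointer scan over each choice of the first
    element gives O(n^2) time overall."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1      # total too small: move left pointer up
            else:
                hi -= 1      # total too large: move right pointer down
    return False

print(has_3sum([-5, 1, 4, 2, -1]))  # True: -5 + 1 + 4 = 0
```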