Computational complexity

In computer science, the computational complexity, or simply complexity, of an algorithm is the amount of resources required to run it.[1] Particular focus is given to computation time (generally measured by the number of needed elementary operations) and memory storage requirements.

Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer.
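
For instance, a simple algorithm can be instrumented to count its steps explicitly. In the sketch below (an illustration only, with names chosen for this example), each execution of the inner loop body is counted as one elementary operation, and the resulting count depends only on the input size n, not on the computer:

    def pairwise_sum_steps(values):
        """Sum every ordered pair of entries, counting loop-body executions.

        Each execution of the inner loop body is treated as one elementary
        step; for an input of size n there are exactly n * n such steps.
        """
        steps = 0
        total = 0
        for x in values:
            for y in values:
                total += x + y
                steps += 1  # one step per inner-loop iteration
        return total, steps

    # The step count depends only on the input size n, not on the machine:
    for n in (10, 100, 1000):
        _, steps = pairwise_sum_steps(range(n))
        print(n, steps)  # prints n followed by n * n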

Space complexity is generally expressed as the amount of memory required by an algorithm on an input of size n.
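
As a small illustration (function names are chosen for this sketch only), the two routines below compute the same reversed list but differ in their space complexity: the first allocates an auxiliary list of size n, while the second works in place with a constant amount of extra memory:

    def reversed_copy(values):
        """Return a reversed copy: builds an auxiliary list of size n,
        so the extra space grows linearly with the input size."""
        result = []
        for x in reversed(values):
            result.append(x)
        return result

    def reverse_in_place(values):
        """Reverse the list in place: only a constant number of extra
        variables are needed, regardless of the input size."""
        i, j = 0, len(values) - 1
        while i < j:
            values[i], values[j] = values[j], values[i]
            i += 1
            j -= 1
        return values

    data = list(range(10))
    print(reversed_copy(data))      # O(n) auxiliary space
    print(reverse_in_place(data))   # O(1) auxiliary space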

The resource that is most commonly considered is time. The usual units of time, such as seconds, are not used in complexity theory because they are too dependent on the choice of a specific computer and on the evolution of technology.

Instead, the time complexity is evaluated by counting the elementary operations executed during the computation. These operations are assumed to take constant time (that is, not affected by the size of the input) on a given machine, and are often called steps.

The bit complexity is the number of operations on bits that are needed for running an algorithm. With most models of computation, it equals the time complexity up to a constant factor.

On computers, the number of operations on machine words that are needed is also proportional to the bit complexity.

Another important resource is the size of computer memory that is needed for running algorithms.

For the class of distributed algorithms that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity.

For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time.

In such cases the arithmetic complexity (counting arithmetic operations at unit cost) and the bit complexity can differ greatly. For example, for Gaussian elimination on integer matrices, coupling the algorithm with multi-modular arithmetic may reduce the bit complexity to Õ(n⁴).
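
A short experiment with Python's arbitrary-precision integers shows why the unit-cost assumption fails here: repeated squaring doubles the bit length at every step, so each arithmetic operation acts on ever larger numbers. The idea of multi-modular arithmetic is to keep intermediate values small by working modulo several machine-sized primes and recombining the results with the Chinese remainder theorem; the sketch below illustrates only the growth and the modular variant, not a full multi-modular algorithm:

    # Repeated squaring: the bit length of x roughly doubles at every
    # iteration, so the cost of each multiplication, measured in bit
    # operations, keeps growing even though it counts as a single
    # arithmetic operation.
    x = 3
    for step in range(6):
        print(step, x.bit_length())
        x = x * x

    # Working modulo a fixed machine-sized prime keeps every intermediate
    # result small, which is the starting point of multi-modular arithmetic.
    p = 2**31 - 1
    y = 3
    for step in range(6):
        y = (y * y) % p
    print("last residue:", y, "bit length:", y.bit_length())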

In sorting and searching, the resource that is generally considered is the number of entry comparisons.

This is generally a good measure of the time complexity if data are suitably organized.
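
For example, binary search on a sorted array needs at most about 2·log₂ n entry comparisons. The sketch below (a hypothetical instrumented version, not a library routine) counts the comparisons explicitly:

    def binary_search(sorted_values, target):
        """Return the index of target in sorted_values (or -1),
        together with the number of entry comparisons performed."""
        comparisons = 0
        lo, hi = 0, len(sorted_values) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            comparisons += 1
            if sorted_values[mid] == target:
                return mid, comparisons
            comparisons += 1
            if sorted_values[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1, comparisons

    data = list(range(1_000_000))
    index, comparisons = binary_search(data, 765_432)
    print(index, comparisons)  # at most about 2 * log2(1_000_000) ≈ 40 comparisons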

The worst-case complexity of an algorithm is the maximum of its complexity over all inputs of size n, and its average-case complexity is the average over all inputs of size n (with respect to some distribution of inputs). Both are generally difficult to compute precisely.
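
A toy case where both are easy to compute exactly is linear search: the worst case examines all n entries, while the average over all possible positions of the target is about n/2. The sketch below (illustrative names only) checks this empirically:

    def linear_search_comparisons(values, target):
        """Return the number of entry comparisons made by a linear search."""
        for count, value in enumerate(values, start=1):
            if value == target:
                return count
        return len(values)

    n = 1000
    data = list(range(n))

    worst = linear_search_comparisons(data, n - 1)  # target is the last entry
    average = sum(linear_search_comparisons(data, t) for t in data) / n
    print(worst, average)  # prints 1000 and 500.5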

The evaluation of the complexity relies on the choice of a model of computation, which consists in defining the basic operations that are done in a unit of time.

When the model of computation is not explicitly specified, it is generally assumed implicitly to be a multitape Turing machine, since several more realistic models of computation, such as random-access machines, are asymptotically equivalent for most problems.

Historically, the first deterministic models were recursive functions, lambda calculus, and Turing machines.

The model of random-access machines (also called RAM-machines) is also widely used, as a closer counterpart to real computers.

For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.

In a non-deterministic model of computation, several computation paths may be explored in parallel. This parallelism is partly amenable to quantum computing, via superposed entangled states used in running specific quantum algorithms, such as Shor's factorization of (so far) only small integers (as of March 2018: 21 = 3 × 7).

Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the P = NP problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds.

Simulating an NP-algorithm on a deterministic computer usually takes "exponential time".

A problem is in the complexity class NP if it can be solved in polynomial time on a non-deterministic machine.
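
Equivalently, a problem is in NP if a proposed solution (a certificate) can be verified in polynomial time, even though finding one deterministically may require examining exponentially many candidates. The subset-sum sketch below (function names invented for this example) contrasts the fast verification with the brute-force search:

    from itertools import combinations

    def verify_subset_sum(numbers, target, certificate):
        """Polynomial-time check: does the proposed subset sum to the target?
        This kind of verification is what places subset sum in NP."""
        return all(x in numbers for x in certificate) and sum(certificate) == target

    def solve_subset_sum(numbers, target):
        """Brute-force deterministic search: tries all 2**n subsets,
        i.e. the 'exponential time' simulation of the non-deterministic guess."""
        for size in range(len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return list(subset)
        return None

    numbers = [3, 34, 4, 12, 5, 2]
    target = 9
    certificate = solve_subset_sum(numbers, target)  # exponential search
    print(certificate, verify_subset_sum(numbers, target, certificate))  # fast check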

As of 2017, it is generally conjectured that P ≠ NP, with the practical implication that the worst cases of NP-complete problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades!) for large inputs.

The study of complexity on quantum computers is used in post-quantum cryptography, which consists of designing cryptographic protocols that are resistant to attacks by quantum computers.

Lower bounds on complexity are often obtained by reducing one problem to another. This is the method that is used to prove that, if P ≠ NP (an unsolved conjecture), the complexity of every NP-complete problem is superpolynomial, that is, not bounded above by any polynomial in the input size.

It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of Moore's law, which posits the exponential growth of the power of modern computers.

For example, when one wants to sort a list of a few hundred entries alphabetically, such as the bibliography of a book, any algorithm should work well in less than a second.

On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require O(n²) comparisons would have to carry out on the order of a trillion comparisons, which would take more than a day at a rate of ten million comparisons per second, whereas algorithms with O(n log n) complexity need only a few tens of millions of comparisons and finish within seconds at the same rate.
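
The arithmetic behind this comparison can be checked directly; the snippet below simply evaluates the two growth rates at n = 10⁶ and converts them into running times at an assumed, purely illustrative rate of ten million comparisons per second:

    import math

    n = 1_000_000
    rate = 10_000_000  # assumed comparisons per second (illustrative only)

    quadratic = n * n                # about 10**12 comparisons
    linearithmic = n * math.log2(n)  # about 2 * 10**7 comparisons

    print(f"O(n^2):     {quadratic:.3g} comparisons, {quadratic / rate / 3600:.1f} hours")
    print(f"O(n log n): {linearithmic:.3g} comparisons, {linearithmic / rate:.1f} seconds")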