Computable function

Before the precise definition of computable function, mathematicians often used the informal term effectively calculable.

In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input.

Informally, a function is computable if its value can be obtained by an effective procedure.[1]

In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number.

As counterparts to this informal description, there exist multiple formal, mathematical definitions.[2]

These functions are sometimes referred to as "recursive", to contrast with the informal term "computable",[3] a distinction stemming from a 1934 discussion between Kleene and Gödel.

The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language.

A set of natural numbers is called computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function f such that for each number n, f(n) is defined if and only if n is in the set.
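This definition can be illustrated with a small sketch. The set of perfect squares is used here purely as an example; the point is that the function below halts exactly on members of the set, so its domain is the set being semidecided:

```python
def semidecide(n):
    # Unbounded search for a witness: halts, returning k, iff n is a
    # perfect square. On non-members (e.g. n = 3) the loop runs forever,
    # so the domain of this partial function is exactly the set itself.
    k = 0
    while k * k != n:
        k += 1
    return k
```

For a genuinely undecidable set no shortcut exists, and non-termination on non-members is unavoidable; here the example set happens to be decidable, which keeps the sketch testable.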

The word enumerable is used because, for a nonempty subset B of the natural numbers, being computably enumerable is equivalent to being the range of a total computable function f. Such a function can be viewed as an enumeration of B, because the list f(0), f(1), ... will include every element of B (possibly with repetitions).
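The enumeration view can be sketched concretely. Taking the perfect squares as an example set B, one total computable function whose range is B is:

```python
def f(i):
    # A total computable function whose range is the set B of perfect
    # squares; the list f(0), f(1), f(2), ... enumerates every element
    # of B (here without repetitions, though repetitions are allowed).
    return i * i

first_five = [f(i) for i in range(5)]  # an initial segment of the enumeration
```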

Some coding system must be developed to allow a computable function to take an arbitrary word in the language as input; this is usually considered routine.

This is permitted because it is believed that all such uses of the thesis can be removed by the tedious process of writing a formal procedure for the function in some model of computation.

Given a function (or, similarly, a set), one may be interested not only in whether it is computable, but also in whether this can be proven in a particular proof system (usually first-order Peano arithmetic).[5]

For definitions of this type to avoid circularity or infinite regress, it is necessary that recursive calls to the same function within a definition be to arguments that are smaller in some well-partial-order on the function's domain.

Any function defined recursively in a well-ordered way is computable: each value can be computed by expanding a tree of recursive calls to the function, and this expansion must terminate after a finite number of calls, because otherwise Kőnig's lemma would lead to an infinite descending sequence of calls, violating the assumption of well-ordering.
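A classic illustration is the Ackermann function, used here as an example of well-founded recursion: every recursive call receives a lexicographically smaller pair (m, n), so the tree of calls is finite and the function is total:

```python
def ackermann(m, n):
    # Each recursive call strictly decreases (m, n) in the lexicographic
    # order on pairs of naturals, which is well-founded, so the expansion
    # of recursive calls always terminates.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Note that the inner call ackermann(m, n - 1) decreases the second coordinate while the outer call decreases the first; either way the pair gets smaller, which is all the well-ordering argument requires.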

If the total computable functions are enumerated via the Turing machines that produce them, then the above statement can be shown, if the proof system is sound, by a diagonalization argument similar to the one used above, using the enumeration of provably total functions given earlier.

One uses a Turing machine that enumerates the relevant proofs, and for every input n calls fn(n) (where fn is the n-th function in this enumeration) by invoking the Turing machine that computes it according to the n-th proof.

Such a Turing machine is guaranteed to halt if the proof system is sound.
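The shape of the diagonal argument can be sketched with a finite stand-in. The list fs below is a hypothetical substitute for the enumeration f0, f1, ... of provably total functions (the real enumeration runs over Turing machines extracted from proofs); the diagonal function g then differs from each fn at input n:

```python
# Hypothetical finite stand-in for the enumeration f_0, f_1, ... of
# provably total functions.
fs = [lambda n: 0, lambda n: n, lambda n: n * n]

def g(n):
    # The diagonal function: computes f_n(n) and adds one, so g cannot
    # equal any f_n, since they disagree at input n.
    return fs[n](n) + 1

diagonal_differs = all(g(n) != fs[n](n) for n in range(len(fs)))
```

Because g is itself total and computable (given the enumeration), it cannot appear in the list, which is the contradiction the argument exploits.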

For example, functions may be encoded using a string of bits (the alphabet Σ = {0, 1}).
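One standard coding of this kind, shown here as an illustrative sketch, is a bijection between words over {0, 1} and the natural numbers: prepend a leading 1 to the word, read the result in binary, and subtract one. The function names encode and decode are ad hoc:

```python
def encode(word):
    # Bijection from words over {0, 1} to naturals:
    # '' -> 0, '0' -> 1, '1' -> 2, '00' -> 3, '01' -> 4, ...
    return int('1' + word, 2) - 1

def decode(n):
    # Inverse: write n + 1 in binary and drop the leading '1'.
    return bin(n + 1)[3:]
```

Under such a coding, a computable function on words corresponds exactly to a computable function on natural numbers, which is why the choice of coding is considered routine.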

Because there are only countably many algorithms, there are only countably many computable functions; but the set of finitary functions on the natural numbers is uncountable, so most of them are not computable.

The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements (coded as natural numbers) are true.

Turing and Church independently showed in the 1930s that this set of natural numbers is not computable.

The notion of computability of a function can be relativized to an arbitrary set of natural numbers A.

This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of A.
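The extra primitive can be sketched by passing the oracle as a membership-testing function. The names below are ad hoc, and the oracle for the even numbers is just a stand-in for an arbitrary (possibly non-computable) set A:

```python
def count_members_below(n, oracle):
    # A function computable relative to A: besides ordinary arithmetic,
    # it may invoke the primitive oracle(k), which answers whether k is
    # a member of A.
    return sum(1 for k in range(n) if oracle(k))

# Stand-in oracle for the set A of even numbers; in general A need not
# itself be computable.
evens = lambda k: k % 2 == 0
```

If A is not computable, such a procedure is not an algorithm in the ordinary sense, but it is still a well-defined relative computation.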

This is equivalent to the sets definable by both a universal and an existential formula in the language of second-order arithmetic, and to some models of hypercomputation.