Computability theory

Computability theory originated in the 1930s, with the work of Kurt Gödel, Alonzo Church, Rózsa Péter, Alan Turing, Stephen Kleene, and Emil Post.[3][b]

The fundamental results the researchers obtained established Turing computability as the correct formalization of the informal idea of effective calculation.

Although initially skeptical, by 1946 Gödel argued in favor of this thesis:[5]: 84 "Tarski has stressed in his lecture (and I think justly) the great importance of the concept of general recursiveness (or Turing's computability)."

This result, the unsolvability of the Entscheidungsproblem obtained independently by Church and Turing in 1936, showed that there is no algorithmic procedure that can correctly decide whether arbitrary mathematical propositions are true or false.

[c] In 1947, Markov and Post published independent papers showing that the word problem for semigroups cannot be effectively decided.

In 1970, Yuri Matiyasevich, using results of Julia Robinson, proved Matiyasevich's theorem, which implies that Hilbert's tenth problem has no effective solution; that problem asked whether there is an effective procedure to decide whether a Diophantine equation over the integers has a solution in the integers.

The main form of computability studied in the field was introduced by Turing in 1936.

[13] An oracle Turing machine is a hypothetical device which, in addition to performing the actions of a regular Turing machine, is able to ask questions of an oracle, which is a particular set of natural numbers.
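For illustration only, the following minimal Python sketch (not from the source; all names are hypothetical) models the oracle as a membership function for a set of natural numbers, so that the same procedure can give different results when relativized to different oracles.

```python
# Illustrative sketch: an "oracle computation" modeled as an ordinary procedure
# that may ask membership questions about a fixed set of natural numbers.

from typing import Callable, Set

Oracle = Callable[[int], bool]   # answers queries "is n in the oracle set?"

def oracle_from_set(s: Set[int]) -> Oracle:
    """Turn an explicit set of naturals into an oracle answering membership queries."""
    return lambda n: n in s

def relative_computation(x: int, oracle: Oracle) -> int:
    """A toy machine relative to an oracle: counts how many of 0..x-1 lie in the
    oracle set. Different oracles generally give different results on the same input."""
    return sum(1 for n in range(x) if oracle(n))

# The same "machine" run with two different oracles.
evens = oracle_from_set({n for n in range(100) if n % 2 == 0})
primes = oracle_from_set({2, 3, 5, 7, 11, 13, 17, 19})
print(relative_computation(20, evens))   # 10
print(relative_computation(20, primes))  # 8
```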

Ten years after Post posed the problem, Kleene and Post showed in 1954 that there are Turing degrees strictly between the degree of the computable sets and that of the halting problem, but they failed to show that any of these degrees contains a computably enumerable set.

Very soon after this, Friedberg and Muchnik independently solved Post's problem by establishing the existence of computably enumerable sets of intermediate degree.

This groundbreaking result opened a wide study of the Turing degrees of the computably enumerable sets, which turned out to possess a very complicated and non-trivial structure.

A survey by Ambos-Spies and Fejer[16] gives an overview of this research and its historical progression.

A Turing machine implementing a strong reducibility will compute a total function regardless of which oracle it is presented with.
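The contrast can be sketched as follows (a hypothetical Python illustration, not from the source): a truth-table-style reduction fixes its oracle queries and its decision rule in advance of consulting the oracle, so it halts on every oracle, whereas an unrestricted Turing reduction may diverge for some oracles.

```python
# Illustrative contrast between a "strong" (truth-table-style) reduction and an
# unrestricted Turing reduction. All names here are hypothetical.

from typing import Callable

Oracle = Callable[[int], bool]

def tt_reduction(x: int, oracle: Oracle) -> bool:
    """Truth-table style: the queries {x, 2*x} and the Boolean combination of their
    answers are determined by x alone, so the computation is total for every oracle."""
    queries = [x, 2 * x]
    answers = [oracle(q) for q in queries]
    return answers[0] and not answers[1]

def turing_reduction(x: int, oracle: Oracle) -> bool:
    """Unrestricted Turing style: later queries depend on earlier answers, and the
    search below never terminates if the oracle answers False from x onwards."""
    n = x
    while not oracle(n):   # may loop forever for some oracles
        n += 1
    return n % 2 == 0

always_false = lambda n: False
print(tt_reduction(7, always_false))   # False, and it halts for every oracle
# turing_reduction(7, always_false) would never return
```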

The weakest set-existence axiom studied in reverse mathematics is recursive comprehension, which states that the powerset of the naturals is closed under Turing reducibility.
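In standard notation (a sketch of the usual formulation, not quoted from the source), the recursive comprehension scheme is Δ⁰₁ comprehension:

```latex
% Delta^0_1 (recursive) comprehension, the set-existence scheme of RCA_0:
% if a Sigma^0_1 formula and a Pi^0_1 formula agree, the set they define exists.
\forall n\,\bigl(\varphi(n) \leftrightarrow \psi(n)\bigr)
  \;\rightarrow\; \exists X\,\forall n\,\bigl(n \in X \leftrightarrow \varphi(n)\bigr),
\qquad \varphi \in \Sigma^0_1,\ \psi \in \Pi^0_1 .
```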

To use this method, the desired properties of the set to be constructed are broken up into an infinite list of goals, known as requirements, so that satisfying all the requirements will cause the set constructed to have the desired properties.

It may happen that satisfying one requirement will cause another to become unsatisfied; the priority order is used to decide what to do in such an event.
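The following toy Python simulation (purely illustrative, not a genuine construction of a computably enumerable set) shows only the bookkeeping of a finite-injury argument: when a requirement acts, all lower-priority requirements are injured and must start over, and because each requirement can be injured only finitely often, every requirement is eventually satisfied and stays satisfied.

```python
# Toy bookkeeping of a finite-injury priority argument. Requirement i may "act"
# at a stage; when it does, every lower-priority requirement (larger index) is
# injured and must start over. Requirement 0 is never injured, requirement 1 can
# be injured only by requirement 0, and so on, so injuries remain finite.

import random

def finite_injury_simulation(num_requirements=5, stages=200, seed=0):
    rng = random.Random(seed)
    satisfied = [False] * num_requirements
    injuries = [0] * num_requirements
    for stage in range(stages):
        # Requirements that want attention at this stage (chosen at random here).
        candidates = [i for i in range(num_requirements)
                      if not satisfied[i] and rng.random() < 0.5]
        if not candidates:
            continue
        i = min(candidates)           # smallest index = highest priority acts
        satisfied[i] = True           # requirement i is (for now) satisfied
        for j in range(i + 1, num_requirements):
            if satisfied[j]:
                satisfied[j] = False  # lower-priority requirements are injured
                injuries[j] += 1
    return satisfied, injuries

print(finite_injury_simulation())     # with enough stages, all requirements end up satisfied
```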

Priority arguments have been employed to solve many problems in computability theory, and have been classified into a hierarchy based on their complexity.

For example, Kummer published a proof of the existence of Friedberg numberings that does not use the priority method.

Post's original motivation in the study of this lattice was to find a structural property such that every set satisfying it lies neither in the Turing degree of the computable sets nor in the Turing degree of the halting problem.

[20][16] The field of Kolmogorov complexity and algorithmic randomness was developed during the 1960s and 1970s by Chaitin, Kolmogorov, Levin, Martin-Löf and Solomonoff (the names are given here in alphabetical order; much of the research was independent, and the unity of the concept of randomness was not understood at the time).

The main idea is to consider a universal Turing machine U and to measure the complexity of a number (or string) x as the length of the shortest input p such that U(p) outputs x.
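As a rough illustration of the definition (not from the source), the sketch below replaces a genuine universal Turing machine with a deliberately simple, non-universal toy description machine and finds the shortest description by brute force; actual Kolmogorov complexity is defined relative to a universal machine and is not computable, only approximable from above.

```python
# Toy illustration of K_U(x) = min{ len(p) : U(p) = x }, with a simple,
# NON-universal description machine U standing in for a universal Turing machine.

from itertools import product
from typing import Optional

def U(p: str) -> str:
    """Toy description machine: a program 'k:w' outputs the string w repeated k times."""
    head, sep, body = p.partition(":")
    if sep != ":" or not head.isdigit():
        return ""                      # malformed programs output the empty string
    return body * int(head)

def toy_complexity(x: str, alphabet: str = "ab0123456789:", max_len: int = 5) -> Optional[int]:
    """Brute-force search for the length of the shortest program p over the alphabet
    with U(p) == x; returns None if no program of length <= max_len produces x."""
    for length in range(1, max_len + 1):
        for chars in product(alphabet, repeat=length):
            if U("".join(chars)) == x:
                return length
    return None

print(toy_complexity("ababababab"))  # 4, via the program "5:ab"
print(toy_complexity("aaaaaaaa"))    # 3, via the program "8:a"
```

Highly regular strings have short descriptions under such a machine, while incompressible strings need programs roughly as long as themselves; this is the intuition behind using description length as a measure of randomness for finite objects.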

This approach revolutionized earlier ways to determine whether an infinite sequence (equivalently, the characteristic function of a subset of the natural numbers) is random, by invoking a notion of randomness for finite objects.

[d] This branch of computability theory analyzed the following question: for fixed m and n with 0 < m < n, for which functions A is it possible to compute, for any n different inputs x1, x2, ..., xn, a tuple of n numbers y1, y2, ..., yn such that at least m of the equations A(xk) = yk are true?
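The condition itself can be made concrete with a small Python sketch (a hypothetical illustration of the definition only, not a theorem): a frequency algorithm must, on any n pairwise distinct inputs, output n guesses of which at least m agree with A.

```python
# Definitional illustration of (m, n)-computation: at least m of the n guesses
# must be correct values of A on the given distinct inputs.

from typing import Callable, List, Sequence

def check_frequency_computation(freq_alg: Callable[[Sequence[int]], List[int]],
                                A: Callable[[int], int],
                                inputs: Sequence[int],
                                m: int) -> bool:
    """Return True if freq_alg's guesses on these (pairwise distinct) inputs
    agree with A in at least m places."""
    guesses = freq_alg(inputs)
    correct = sum(1 for x, y in zip(inputs, guesses) if A(x) == y)
    return correct >= m

def guess_zeros(xs: Sequence[int]) -> List[int]:
    """A toy guessing rule: answer 0 for every input. It is NOT a valid
    (1,2)-computation for every A; it only shows what the condition asks for."""
    return [0] * len(xs)

def A_even(x: int) -> int:            # a stand-in for the function A
    return 1 if x % 2 == 0 else 0

print(check_frequency_computation(guess_zeros, A_even, (3, 5), m=1))  # True: A(3) = A(5) = 0
print(check_frequency_computation(guess_zeros, A_even, (2, 4), m=1))  # False: both values are 1
```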

After a long phase of research by Russian scientists, this subject was repopularized in the West by Beigel's thesis on bounded queries, which linked frequency computation to the above-mentioned bounded reducibilities and other related notions.

Many related models have been considered, and the learning of classes of computably enumerable sets from positive data has also been studied, from Gold's pioneering paper in 1967 onwards.
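The following small Python sketch (illustrative only, not from the source) shows identification in the limit from positive data in the spirit of Gold's model, for the simple class of sets of positive multiples of a fixed k: the learner conjectures the gcd of the data seen so far, and once k itself appears in the text, the conjecture is exactly k and never changes again.

```python
# Gold-style identification in the limit from positive data, for the toy class
# L_k = { k, 2k, 3k, ... }. The learner's conjecture is the gcd of the data so far.

from math import gcd
from typing import Iterable, List

def learner(positive_data: Iterable[int]) -> List[int]:
    """Return the sequence of conjectures (one per data item) for the parameter k."""
    conjectures = []
    current = 0                      # gcd(0, x) == x, so the first datum is the first guess
    for x in positive_data:
        current = gcd(current, x)
        conjectures.append(current)
    return conjectures

# A "text" for L_6: some enumeration of multiples of 6 (repetitions allowed).
text = [24, 36, 18, 6, 42, 12, 6, 30]
print(learner(text))   # [24, 12, 6, 6, 6, 6, 6, 6] -- converges to 6
```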

These studies include approaches to investigate the analytical hierarchy, which differs from the arithmetical hierarchy by permitting quantification over sets of natural numbers in addition to quantification over individual numbers.
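In standard notation (a sketch, not quoted from the article), the contrast is between definitions that quantify only over numbers and definitions with a leading set quantifier in front of an arithmetical matrix:

```latex
% Arithmetical definitions quantify only over numbers; analytical ones also over sets.
\text{arithmetical (e.g. } \Sigma^0_2\text{):}\quad
  n \in A \;\iff\; \exists k\,\forall m\; R(n, k, m) \qquad (R \text{ computable})
\\[4pt]
\text{analytical (e.g. } \Sigma^1_1\text{):}\quad
  n \in B \;\iff\; \exists X \subseteq \mathbb{N}\;\; \theta(n, X) \qquad (\theta \text{ arithmetical})
```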

The field of mathematical logic dealing with computability and its generalizations has been called "recursion theory" since its early days.

[27] In 1967, Rogers[28] suggested that a key property of computability theory is that its results and structures should be invariant under computable bijections on the natural numbers (this suggestion draws on the ideas of the Erlangen program in geometry).

The main professional organization for computability theory is the Association for Symbolic Logic, which holds several research conferences each year.

The Association Computability in Europe (CiE), an interdisciplinary research association, also organizes a series of annual conferences.