Halting problem

A key part of the formal statement of the problem is a mathematical definition of a computer and program, usually via a Turing machine.

The behavior of f on g establishes undecidability: it shows that no candidate program f can solve the halting problem in every possible case.

The essence of Turing's proof is that any such algorithm can be made to produce contradictory output and therefore cannot be correct.
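This construction can be sketched in a few lines. The names `f` and `g` follow the text above; the candidate decider `always_no` below is a hypothetical stand-in for an alleged halting solver, not part of the original argument:

```python
def make_g(f):
    """Given a claimed halting decider f(program, argument) -> bool,
    construct a program g that does the opposite of whatever f predicts."""
    def g(x):
        if f(g, x):        # f claims g halts on x...
            while True:    # ...so g loops forever,
                pass
        # otherwise f claims g loops on x, so g halts immediately.
    return g

def always_no(program, argument):
    return False           # hypothetical decider: claims nothing halts

g = make_g(always_no)
g(0)  # returns immediately, contradicting always_no's prediction
```

Whatever answer the decider gives about g, the construction makes g behave the opposite way, so no decider can be correct on its own g.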

[3] Sometimes such programmers use a general-purpose (Turing-complete) programming language but attempt to write in a restricted style—such as MISRA C or SPARK—that makes it easy to prove that the resulting subroutines finish before the given deadline.

Other times these programmers apply the rule of least power—they deliberately use a computer language that is not quite fully Turing-complete.

The difficulty in the halting problem lies in the requirement that the decision procedure must work for all programs and inputs.

The halting problem is theoretically decidable for linear bounded automata (LBAs) or deterministic machines with finite memory.

A machine with finite memory has a finite number of configurations, and thus any deterministic program on it must eventually either halt or repeat a previous configuration:[4]

"...any finite-state machine, if left completely to itself, will fall eventually into a perfectly periodic repetitive pattern."
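This cycle-detection argument translates directly into a decision procedure. In the sketch below, `step` is an assumed abstraction of a deterministic finite-memory machine, not a standard API:

```python
def halts(step, start):
    """Decide halting for a deterministic machine with finitely many
    configurations. step(config) returns the next configuration, or
    None when the machine halts. Since the configuration space is
    finite, the run must either reach None or revisit a configuration."""
    seen = set()
    config = start
    while config is not None:
        if config in seen:
            return False   # repeated configuration: the machine cycles forever
        seen.add(config)
        config = step(config)
    return True            # reached a halting configuration

# A counter that stops at 10 halts; a counter modulo 5 cycles forever.
print(halts(lambda c: c + 1 if c < 10 else None, 0))  # True
print(halts(lambda c: (c + 1) % 5, 0))                # False
```

The procedure visits at most as many steps as there are configurations, so it always terminates with a correct answer.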

Even if such a machine were to operate at the frequencies of cosmic rays, the aeons of galactic evolution would be as nothing compared to the time of a journey through such a cycle.

Although a machine may be finite, and finite automata "have a number of theoretical limitations":[5]

"...the magnitudes involved should lead one to suspect that theorems and arguments based chiefly on the mere finiteness [of] the state diagram may not carry a great deal of significance."

It can also be decided automatically whether a nondeterministic machine with finite memory halts on none, some, or all of the possible sequences of nondeterministic decisions, by enumerating states after each possible decision.
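The nondeterministic case can likewise be sketched by enumerating reachable configurations. Here `successors` is an assumed interface returning the possible next configurations, with an empty list meaning the machine halts in that configuration:

```python
from collections import deque

def classify(successors, start):
    """Return 'none', 'some', or 'all': on how many sequences of
    nondeterministic decisions does the machine halt?"""
    # Enumerate every configuration reachable from start.
    reachable, queue = {start}, deque([start])
    while queue:
        for nxt in successors(queue.popleft()):
            if nxt not in reachable:
                reachable.add(nxt)
                queue.append(nxt)
    if not any(not successors(c) for c in reachable):
        return "none"      # no halting configuration is reachable
    # All decision sequences halt iff no reachable cycle exists
    # (in a finite graph, an infinite run must revisit a configuration).
    color = {}             # absent = unvisited, 1 = on stack, 2 = done
    def has_cycle(c):
        color[c] = 1
        for nxt in successors(c):
            if color.get(nxt) == 1 or (nxt not in color and has_cycle(nxt)):
                return True
        color[c] = 2
        return False
    return "some" if has_cycle(start) else "all"

graph = {0: [0, 1], 1: []}  # one choice loops forever, the other halts
print(classify(lambda c: graph[c], 0))  # some
```

Because the configuration graph is finite, both the reachability search and the cycle check terminate, which is exactly why finite-memory halting is decidable while the general problem is not.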

In April 1936, Alonzo Church published his proof of the undecidability of a problem in the lambda calculus.

[25] A search of the academic literature from 1936 to 1958 showed that the first published material using the term “halting problem” was Rogers (1957).

[23] The usage in Davis's book is as follows:[27] "[...] we wish to determine whether or not [a Turing machine] Z, if placed in a given initial state, will eventually halt."

However, the result is in no way specific to Turing machines; it applies equally to any other model of computation of equivalent computational power, such as Markov algorithms, lambda calculus, Post systems, register machines, or tag systems.

The conventional representation of decision problems is the set of objects possessing the property in question.

Examples of such sets include the halting set K = {(i, x) : program i halts when run on input x}.

Christopher Strachey outlined a proof by contradiction that the halting problem is not solvable.

One may visualize a two-dimensional array with one column and one row for each natural number, as indicated in the table above.
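The diagonal argument on such an array can be sketched concretely. The three-row `table` below is a hypothetical finite stand-in for the infinite enumeration of total functions:

```python
# Diagonalization sketch: given any (here, finite) list of total functions
# on the naturals, build a function that differs from the i-th function at
# input i, and so equals no row of the table.
table = [
    lambda n: 0,       # f_0: constantly zero
    lambda n: n,       # f_1: identity
    lambda n: n * n,   # f_2: squaring
]

def diagonal(i):
    # Disagrees with table[i] at input i by construction.
    return table[i](i) + 1

print([diagonal(i) for i in range(3)])  # [1, 2, 5]
```

Since `diagonal` disagrees with the i-th row at column i, it cannot appear anywhere in the enumeration, which is the heart of the uncountability-style argument used against a halting decider.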

For example, there cannot be a general algorithm that decides whether a given statement about natural numbers is true or false.

The reason for this is that the proposition stating that a certain program will halt given a certain input can be converted into an equivalent statement about natural numbers.

There are some heuristics that can be used in an automated fashion to attempt to construct a proof, which frequently succeed on typical programs.

Some results have been established on the theoretical performance of halting problem heuristics, in particular the fraction of programs of a given size that may be correctly classified by a recursive algorithm.

These results do not give precise numbers because the fractions are uncomputable and also highly dependent on the choice of program encoding used to determine "size".

For example, consider classifying programs by their number of states and using a specific "Turing semi-infinite tape" model of computation that errors (without halting) if the program runs off the left side of the tape.

New varieties of programs appear infrequently but arrive in arbitrarily large "blocks", while the fraction of repeated programs grows steadily.

In particular a "tally" heuristic that simply remembers the first N inputs and recognizes their equivalents allows reaching an arbitrarily low error rate infinitely often.
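A minimal sketch of such a tally heuristic follows. The `KNOWN` table is a hypothetical precomputed list of answers for the first few programs; for each fixed N such a finite table exists in principle, even though no single algorithm produces it for all N:

```python
# "Tally" heuristic sketch: hardcode halting answers for the first N programs
# (a finite table, assumed precomputed) and answer "unknown" otherwise.
KNOWN = {
    0: True,    # program 0 halts (hypothetical entry)
    1: False,   # program 1 runs forever (hypothetical entry)
    2: True,
}

def tally_heuristic(program_index):
    return KNOWN.get(program_index, "unknown")

print(tally_heuristic(1))   # False
print(tally_heuristic(99))  # unknown
```

As N grows, the memorized programs and their repeats make up a larger share of inputs, which is how the error rate can be driven arbitrarily low infinitely often without ever deciding the problem in general.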

In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem.

This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible.

The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers.

[37] Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers.

Given such an axiomatization, we could build an algorithm that enumerates all of its proofs; for any program and input, the enumeration would eventually yield either a proof that the program halts or a proof that it does not, and so the algorithm would decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
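The hypothetical decider extracted from such an axiomatization can be sketched as follows. The `theorems` stream is an assumed enumerator of the system's provable statements (here a toy list): completeness guarantees that one of the two statements eventually appears, and soundness makes the returned answer correct.

```python
def decide_halting(halts_stmt, loops_stmt, theorems):
    """Hypothetical decider built from an assumed complete, sound
    axiomatization: scan the enumeration of provable statements until
    either 'P halts on x' or 'P does not halt on x' appears."""
    for theorem in theorems:
        if theorem == halts_stmt:
            return True
        if theorem == loops_stmt:
            return False

# Toy finite list standing in for the (infinite) stream of theorems.
toy_theorems = ["1 + 1 = 2", "P does not halt on x", "2 + 2 = 4"]
print(decide_halting("P halts on x", "P does not halt on x", toy_theorems))  # False
```

The undecidability of halting then shows no such theorem stream can exist for a complete, sound axiomatization of arithmetic, which is exactly the contradiction the argument above draws.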

The above argument is a reduction of the halting problem to PHS recognition, and in the same manner, harder problems such as halting on all inputs can also be reduced, implying that PHS recognition is not only undecidable, but higher in the arithmetical hierarchy, specifically