Algorithm characterizations

But most agree that algorithm has something to do with defining generalized processes for the creation of "output" integers from other "input" integers – "input parameters" arbitrary and infinite in extent, or limited in extent but still variable – by the manipulation of distinguishable symbols (counting numbers) with finite collections of rules that a person can perform with paper and pencil.

When we are doing "arithmetic", we are really calculating by the use of "recursive functions" in the shorthand algorithms we learned in grade school – for example, adding and subtracting.
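As an illustration, here is a minimal sketch in Python (the function names are illustrative, not any standard notation) of how grade-school addition and multiplication can be built up by primitive recursion from nothing more than zero and the successor function:

    # Sketch only: addition and multiplication defined by primitive recursion
    # from zero and the successor function; the names are illustrative.
    def succ(n):
        # The one "built-in" step: n -> n + 1.
        return n + 1

    def add(m, n):
        # add(m, 0) = m; add(m, n + 1) = succ(add(m, n)).
        return m if n == 0 else succ(add(m, n - 1))

    def mul(m, n):
        # mul(m, 0) = 0; mul(m, n + 1) = add(mul(m, n), m).
        return 0 if n == 0 else add(mul(m, n - 1), m)

    assert add(3, 4) == 7 and mul(3, 4) == 12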

A mathematical problem and its result can be considered as two points in a space, and the solution consists of a sequence of steps or a path linking them.

By means of what Couturat (1914) called a "sort of logical piano"[,] ... the equalities which represent the premises ... are "played" on a keyboard like that of a typewriter.

But of historical use to the developing notion of "algorithm" is his explanation for his negative reaction with respect to a machine that "may subserve a really valuable purpose by enabling us to avoid otherwise inevitable labor": He concludes that "I cannot see that any machine can hope to help us except in the third of these steps; so that it seems very doubtful whether any thing of this sort really deserves the name of a logical engine."

This section is longer and more detailed than the others because of its importance to the topic: Kleene was the first to propose that all calculations/computations – of every sort, the totality of them – can equivalently be (i) calculated by use of five "primitive recursive operators" plus one special operator called the mu-operator, or (ii) computed by the actions of a Turing machine or an equivalent model.

A "function" can be thought of as an "input-output box" into which a person puts natural numbers called "arguments" or "parameters" (but only the counting numbers including 0—the nonnegative integers) and gets out a single nonnegative integer (conventionally called "the answer").

He would later amplify it in his text (1952) as follows: (His use of the words "decision" and "predicate" extends the notion of calculability to the more general manipulation of symbols such as occurs in mathematical "proofs".)

Kleene et al. (cf §55 General recursive functions p. 270 in Kleene 1952) had to add a sixth recursion operator called the minimization-operator (written as μ-operator or mu-operator) because Ackermann (1925) produced a hugely growing function—the Ackermann function—and Rózsa Péter (1935) produced a general method of creating recursive functions using Cantor's diagonal argument, neither of which could be described by the 5 primitive-recursive-function operators.
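The distinction can be illustrated with a short Python sketch (the code is illustrative and not taken from Kleene's text): the Ackermann–Péter function is clearly computable yet grows too fast to be expressed with the five primitive-recursive operators alone, while the mu-operator amounts to an unbounded search that need not terminate.

    # Illustrative sketch. The Ackermann function is computable but not
    # primitive recursive: it outgrows every function built from the five
    # primitive-recursive operators.
    def ackermann(m, n):
        if m == 0:
            return n + 1
        if n == 0:
            return ackermann(m - 1, 1)
        return ackermann(m - 1, ackermann(m, n - 1))

    # The mu-operator as unbounded search: return the least n with p(n) == 0.
    # The loop need not terminate, which is what separates general (partial)
    # recursive functions from the primitive recursive ones.
    def mu(p):
        n = 0
        while p(n) != 0:
            n += 1
        return n

    assert ackermann(2, 3) == 9
    assert mu(lambda n: (n - 5) ** 2) == 5  # least n with (n - 5)^2 == 0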

Markov's 1954 monograph was his attempt to define algorithm more accurately; he saw his resulting definition – his "normal" algorithm – as "equivalent to the concept of a recursive function" (p. 3).
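The flavour of a normal algorithm can be conveyed by a toy interpreter – a Python sketch based on the usual description of normal algorithms as an ordered list of string-rewriting rules; the interpreter and the unary-addition rule below are illustrative, not Markov's own notation:

    # Toy sketch of a normal (Markov) algorithm: scan an ordered list of
    # rewriting rules, rewrite the leftmost occurrence for the first rule
    # that applies, and repeat; stop on a terminal rule or when none applies.
    def run_normal_algorithm(rules, word, max_steps=10_000):
        for _ in range(max_steps):
            for pattern, replacement, terminal in rules:
                if pattern in word:
                    word = word.replace(pattern, replacement, 1)
                    if terminal:
                        return word
                    break
            else:
                return word  # no rule applies: the algorithm halts
        raise RuntimeError("step limit exceeded")

    # Unary addition: erasing the '+' sign turns '|||+||' into '|||||' (3 + 2 = 5).
    assert run_normal_algorithm([("+", "", False)], "|||+||") == "|||||"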

Ian Stewart (cf Encyclopædia Britannica) shares a similar belief: "...constructive analysis is very much in the same algorithmic spirit as computer science...".

Gandy's "Fourth Principle for Mechanisms" is "The Principle of Local Causality": 1936: A rather famous quote from Kurt Gödel appears in a "Remark added in proof [of the original German publication] in his paper "On the Length of Proofs" translated by Martin Davis appearing on pp.

1963: In a "Note" dated 28 August 1963 added to his famous paper On Formally Undecidable Propositions (1931), Gödel states (in a footnote) his belief that "formal systems" have "the characteristic property that reasoning in them, in principle, can be completely replaced by mechanical devices" (p. 616 in van Heijenoort).

1964: In a Postscriptum, dated 1964, to a paper presented to the Institute for Advanced Study in spring 1934, Gödel amplified his conviction that "formal systems" are those that can be mechanized. The * indicates a footnote in which Gödel cites the papers by Alan Turing (1937) and Emil Post (1936) and then goes on to make an intriguing statement. Church's definitions encompass so-called "recursion" and the "lambda calculus" (i.e. the λ-definable functions).

To avoid the "cumbersome" process of "having to do this over again for each individual procedure" he hopes to identify a "reasonably uniform family of rule-obeying mechanisms".

He then goes on to describe the notion "in approximate and intuitive terms" as having 10 "features", 5 of which, he asserts, "virtually all mathematicians would agree [to]" (p. 2).

The 5 "obvious" are: The remaining 5 that he opens to debate, are: Knuth (1968, 1973) has given a list of five properties that are widely accepted as requirements for an algorithm: Knuth offers as an example the Euclidean algorithm for determining the greatest common divisor of two natural numbers (cf.

He makes an effort in this direction in his first volume where he defines in detail what he calls the "machine language" for his "mythical MIX...the world's first polyunsaturated computer" (pp. 120ff).

Early in the paper, the authors state their reading of Moschovakis; but they waffle here, saying "[L]et's stick to 'algorithm' and 'machine'", and the reader is left, again, confused.

[cf Kleene (1952) p. 375 where he shows an example of a tape with 6 symbols on it—all other squares are blank—and how to Gödelize its combined table-tape status].
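A minimal sketch of what such a Gödelization might look like, assuming the familiar prime-power encoding of a finite sequence (Kleene's own scheme for the combined table-tape status is more involved):

    # Sketch: encode a finite sequence of symbol codes s1, s2, ..., sk as the
    # single number 2**s1 * 3**s2 * 5**s3 * ... (generic prime-power encoding).
    def godel_number(symbols):
        primes, candidate = [], 2
        while len(primes) < len(symbols):
            if all(candidate % p for p in primes):  # trial division by smaller primes
                primes.append(candidate)
            candidate += 1
        n = 1
        for p, s in zip(primes, symbols):
            n *= p ** s
        return n

    # A tape holding the six symbol codes 1,1,0,1,1,1 collapses to one integer.
    assert godel_number([1, 1, 0, 1, 1, 1]) == 2 * 3 * 7 * 11 * 13  # 6006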

Philosopher Daniel Dennett analyses the importance of evolution as an algorithmic process in his 1995 book Darwin's Dangerous Idea.

Daniel Dennett is a proponent of strong artificial intelligence: the idea that the logical structure of an algorithm is sufficient to explain mind.

Searle cautions those who claim that algorithmic (computational) processes are intrinsic to nature (for example, cosmologists, physicists, chemists, etc.).

(In the following, boldface is added for emphasis.) This specification is incomplete: it requires the location of where the instructions are to be placed and their format in the machine. This latter point is important.

First, the input must be encoded as a string (p. 157), and he says of numeric encodings in the context of complexity theory:

Van Emde Boas comments on a similar problem with respect to the random-access machine (RAM), an abstract model of computation sometimes used in place of the Turing machine when doing "analysis of algorithms": "The absence or presence of multiplicative and parallel bit manipulation operations is of relevance for the correct understanding of some results in the analysis of algorithms.

[T]here hardly exists such a thing as an "innocent" extension of the standard RAM model in the uniform time measures; either one only has additive arithmetic or one might as well include all reasonable multiplicative and/or bitwise Boolean instructions on small operands."

(Van Emde Boas, 1990:26)

With regard to a "description language" for algorithms, Sipser finishes the job that Stone and Boolos-Burgess-Jeffrey started (boldface added).
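Returning to the point that the input must first be encoded as a string, why the choice of encoding matters can be seen with a small illustration (not taken from Sipser's text): the same number has wildly different lengths under unary and binary encodings, so any complexity bound stated "in the size of the input" depends on which encoding is chosen.

    # Illustrative only: the length of the input string depends on the encoding.
    def unary(n):
        return "1" * n          # n symbols

    def binary(n):
        return bin(n)[2:]       # about log2(n) + 1 symbols

    n = 1_000_000
    assert len(unary(n)) == 1_000_000
    assert len(binary(n)) == 20  # since 2**19 <= 1_000_000 < 2**20

An algorithm whose number of steps is proportional to the value n is therefore polynomial in the length of a unary input but exponential in the length of a binary input.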

The conditions that describe when two programs are equivalent turn out to be coherence relations which give the extra structure to the category of algorithms.

In Seiller (2024)[4] an algorithm is defined as an edge-labelled graph, together with an interpretation of labels as maps in an abstract data structure.

Another important feature of the approach is that it takes into account the fact that a given algorithm can be implemented in different (and possibly unrelated) computational models.
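A toy illustration of the flavour of this definition (a Python sketch only, not Seiller's actual construction): the same edge-labelled graph, read against two different interpretations of its labels, yields implementations of the same algorithm over two different data structures.

    # Toy sketch (not Seiller's formalism): an algorithm as an edge-labelled
    # graph together with an interpretation of the labels as partial maps.
    edges = [("q0", "dec", "q0"),     # loop while 'dec' is defined
             ("q0", "done", "halt")]  # otherwise test for zero and halt

    # Interpretation 1: the labels act on Python integers.
    as_ints = {"dec":  lambda n: n - 1 if n > 0 else None,
               "done": lambda n: n if n == 0 else None}

    # Interpretation 2: the same labels act on unary strings.
    as_unary = {"dec":  lambda w: w[1:] if w else None,
                "done": lambda w: w if w == "" else None}

    def run(interpretation, state):
        # Follow edges from q0, firing the first label whose map is defined.
        node = "q0"
        while node != "halt":
            for src, label, dst in edges:
                result = interpretation[label](state) if src == node else None
                if result is not None:
                    state, node = result, dst
                    break
            else:
                raise RuntimeError("no applicable edge")
        return state

    assert run(as_ints, 3) == 0
    assert run(as_unary, "111") == ""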