Random-access machine

The 'registers' are intuitively equivalent to the main memory of a common computer, except that a register has the additional ability to store a natural number of any size.
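As a minimal sketch of that idea (the class name Registers and its methods are conveniences of this sketch, not part of any formal definition), the register file can be modeled as a mapping from addresses to arbitrarily large natural numbers; Python's unbounded integers make the "any size" property immediate:

```python
# A minimal sketch of a RAM register file: addresses map to natural
# numbers of unbounded size.  Registers not yet written read as 0.
class Registers:
    def __init__(self):
        self._cells = {}          # address -> natural number

    def read(self, address):
        return self._cells.get(address, 0)

    def write(self, address, value):
        if value < 0:
            raise ValueError("registers hold natural numbers only")
        self._cells[address] = value

regs = Registers()
regs.write(3, 10 ** 100)          # a register may hold a number of any size
print(regs.read(3) > 2 ** 64)     # True: no machine-word bound
```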

It is an example of the so-called von Neumann architecture and is closest to the common notion of a computer.

An RA-machine consists of an unbounded collection of uniquely addressed registers together with a finite-state machine whose TABLE of instructions operates on them. (For a description of a similar, but humorous, concept, see the esoteric programming language Brainfuck.)

The first enhancement equips the machine with the convenience of indirect addressing; the second moves the model toward the more conventional accumulator-based computer, with the addition of one or more auxiliary (dedicated) registers, the most common of which is called "the accumulator".

Finally, a conditional-expression in the form of an IF-THEN-ELSE is available to test the contents of one or two registers and "branch/jump" the finite-state machine out of the default instruction-sequence.
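The base instruction set can be sketched as a small interpreter. The tuple encoding and the names JZ and HALT are choices of this sketch (only INC, DEC and CLR are named in the surrounding text); the conditional jump plays the IF-THEN-ELSE role of testing a register and branching the machine out of its default sequence:

```python
# Illustrative interpreter for a tiny register machine.  The program is the
# finite "TABLE"; the conditional jump JZ plays the IF-THEN-ELSE role:
# IF register r is zero THEN go to the target label ELSE fall through.
def run(program, registers):
    pc = 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "INC":
            registers[args[0]] = registers.get(args[0], 0) + 1
        elif op == "DEC":
            registers[args[0]] = max(0, registers.get(args[0], 0) - 1)
        elif op == "CLR":
            registers[args[0]] = 0
        elif op == "JZ":                      # conditional branch/jump
            r, target = args
            if registers.get(r, 0) == 0:
                pc = target
                continue
        elif op == "HALT":
            break
        pc += 1
    return registers

# Example: move the contents of register 1 into register 2.
program = [
    ("JZ", 1, 4),      # 0: if R1 == 0, we are done
    ("DEC", 1),        # 1
    ("INC", 2),        # 2
    ("JZ", 3, 0),      # 3: unconditional jump back (R3 is never incremented)
    ("HALT",),         # 4
]
print(run(program, {1: 3, 2: 0}))   # {1: 0, 2: 3}
```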

Moreover, from base sets 1, 2, or 3 we can create any of the primitive recursive functions (cf. Minsky (1967), Boolos-Burgess-Jeffrey (2002)).
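As a schematic illustration of how the primitive recursive functions arise from such a base set (the Python rendering is a sketch, not a transcription of Minsky or Boolos-Burgess-Jeffrey), the following builds addition and then multiplication from nothing but increment, decrement and test-for-zero, with every loop bounded in advance:

```python
# Illustrative sketch: primitive recursive functions built up from the base
# operations.  Only increment, decrement and test-for-zero are used, and
# every loop is bounded by a value fixed before the loop starts
# (the hallmark of primitive recursion).
def add(a, b):
    while b != 0:                 # bounded by the initial value of b
        a, b = a + 1, b - 1       # INC (a); DEC (b)
    return a

def mul(a, b):
    product = 0
    while b != 0:                 # bounded by the initial value of b
        product, b = add(product, a), b - 1
    return product

print(add(2, 3), mul(2, 3))       # 5 6
```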

(How to cast the net wider to capture the total and partial mu recursive functions will be discussed in the context of indirect addressing.)

The problem appears most dramatically when one tries to use a counter-machine model to build a RASP that is Turing equivalent and thus compute any partial mu recursive function: how do we address a register beyond the bounds of the finite-state machine?

This is how Minsky solves the problem, but the Gödel numbering he uses – the contents of every register packed into one enormous number held in a single register – represents a great inconvenience to the model, and the result is nothing at all like our intuitive notion of a "stored program computer".
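The flavor of that inconvenience can be seen in a sketch of the standard prime-power Gödel numbering (illustrative only, not Minsky's exact construction): the contents of every register are packed into one enormous number, and even reading a single register back requires repeated division:

```python
# Illustrative sketch of the Goedel-numbering device: the contents of
# register i become the exponent of the i-th prime inside one "monster"
# number held in a single register.  Even a simple read requires repeated
# division: the "great inconvenience" mentioned above.
def nth_prime(i):
    # naive prime search, adequate for a small demonstration
    count, candidate = 0, 1
    while count <= i:
        candidate += 1
        if all(candidate % d for d in range(2, int(candidate ** 0.5) + 1)):
            count += 1
    return candidate

def encode(register_values):
    goedel = 1
    for i, value in enumerate(register_values):
        goedel *= nth_prime(i) ** value
    return goedel

def read_register(goedel, i):
    p, exponent = nth_prime(i), 0
    while goedel % p == 0:        # strip factors of p one at a time
        goedel //= p
        exponent += 1
    return exponent

g = encode([3, 0, 5])             # registers R0=3, R1=0, R2=5 in one number
print(g, read_register(g, 2))     # 25000 5  (2**3 * 5**5 = 25000, R2 == 5)
```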

In the context of a more computer-like model using his RPT (repeat) instruction, Minsky (1967) tantalizes us with a solution to the problem (cf. p. 214, p. 259) but offers no firm resolution.

He offers us a bounded RPT that, together with CLR (r) and INC (r), can compute any primitive recursive function, and he offers the unbounded RPT described above as playing the role of the μ operator; together with CLR (r) and INC (r) it can compute the mu recursive functions.
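The unbounded RPT in its μ-operator role can be sketched as an unbounded search (the function name mu and the use of a Python predicate are conveniences of this sketch): clear a register, then keep incrementing it until the test succeeds, with no guarantee of termination:

```python
# Illustrative sketch of the mu operator as an unbounded search: clear a
# register, then keep incrementing it until the predicate holds.  Unlike the
# bounded RPT, nothing guarantees termination, which is exactly what lets
# the machine reach the partial mu recursive functions.
def mu(predicate):
    n = 0                         # CLR (r)
    while not predicate(n):       # unbounded repeat
        n += 1                    # INC (r)
    return n

# Example: the least n with n * n >= 2000 (a total case, so the search halts).
print(mu(lambda n: n * n >= 2000))    # 45
```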

From the references in Hartmanis (1971) it appears that Cook (in his lecture notes while at UC Berkeley, 1970) has firmed up the notion of indirect addressing.

For this to work, in general, the unbounded register requires an ability to be cleared and then incremented (and, possibly, decremented) by a potentially infinite loop.

If we eschew the Minsky approach of one monster number in one register, and specify that our machine model will be "like a computer", we have to confront this problem of indirection if we are to compute the recursive functions (also called the μ-recursive functions) – both total and partial varieties.

Our simpler counter-machine model can do a "bounded" form of indirection – and thereby compute the sub-class of primitive recursive functions – by using a primitive recursive "operator" called "definition by cases" (defined in Kleene (1952) p. 229 and Boolos-Burgess-Jeffrey p. 74).
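A sketch of "definition by cases" and the bounded indirection it buys is given below; the helper names define_by_cases and bounded_fetch are inventions of this sketch, and the point is only that the list of cases is fixed, and finite, when the program is written:

```python
# Illustrative sketch of "definition by cases": a fixed, finite list of
# (predicate, function) pairs chosen when the program is written.  Because
# the list is finite, only a bounded form of indirection can be expressed.
def define_by_cases(cases, default):
    def f(*args):
        for predicate, g in cases:
            if predicate(*args):
                return g(*args)
        return default(*args)
    return f

# Bounded "indirect read": fetch the contents of the register whose index is
# held in pointer p, but only for the finitely many registers 0, 1, 2.
def bounded_fetch(registers, p):
    fetch = define_by_cases(
        [(lambda r, p: p == 0, lambda r, p: r[0]),
         (lambda r, p: p == 1, lambda r, p: r[1]),
         (lambda r, p: p == 2, lambda r, p: r[2])],
        default=lambda r, p: 0,          # out of range: fixed fallback
    )
    return fetch(registers, p)

print(bounded_fetch([7, 11, 13], 2))     # 13
```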

To be Turing equivalent the counter machine needs to either use the unfortunate single-register Minsky Gödel number method, or be augmented with an ability to explore the ends of its register string, ad infinitum if necessary.

(A failure to find something "out there" defines what it means for an algorithm to fail to terminate; cf. Kleene (1952).)

This "where" can be either (i) the state machine's instruction that provides an explicit label, or (ii) the pointer-register whose contents are the address of interest.

Worse, every two-parameter/register instruction will have 4 possible varieties, since each operand can be specified either directly or indirectly; in a similar manner every three-register instruction that involves two source registers rs1, rs2 and a destination register rd will result in 8 varieties – for example the addition ADD (rs1, rs2, rd). If we designate one register to be the "accumulator" (see below) and place strong restrictions on the various instructions allowed, then we can greatly reduce this plethora of direct and indirect operations.
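The proliferation can be made concrete with a sketch in which every operand carries a mode flag, "d" for direct or "i" for indirect (the flag notation and the helper resolve are illustrative): two flags give the 4 varieties of a copy, three flags give the 8 varieties of an addition:

```python
# Illustrative sketch: every operand of an instruction can be flagged
# direct ("d", the register named) or indirect ("i", the register named by
# that register's contents).  A two-operand CPY therefore has 2*2 = 4
# varieties and a three-operand ADD has 2*2*2 = 8.
def resolve(registers, mode, r):
    return r if mode == "d" else registers[r]   # indirect: follow the pointer

def cpy(registers, ms, rs, md, rd):
    registers[resolve(registers, md, rd)] = registers[resolve(registers, ms, rs)]

def add(registers, m1, rs1, m2, rs2, md, rd):
    registers[resolve(registers, md, rd)] = (
        registers[resolve(registers, m1, rs1)]
        + registers[resolve(registers, m2, rs2)]
    )

regs = {0: 2, 1: 5, 2: 0, 3: 1}
cpy(regs, "i", 3, "d", 2)           # R2 := [R3]      (one of the 4 CPY forms)
add(regs, "d", 0, "d", 1, "i", 3)   # [R3] := R0 + R1 (one of the 8 ADD forms)
print(regs[2], regs[1])             # 5 7
```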

However, we must be sure that the resulting reduced instruction set is sufficient, and we must be aware that the reduction will come at the expense of more instructions per "significant" operation.

Tradition (e.g. Knuth's (1973) imaginary MIX computer) uses two names for the copies into and out of the accumulator: LOAD and STORE.

Both instruction-types deposit the result (e.g. sum, difference, product, quotient or remainder) in the accumulator.
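A minimal sketch of the accumulator convention, assuming nothing beyond LOAD, STORE and one-operand arithmetic (the class and method names are inventions of this sketch), shows how the dedicated register A absorbs one operand of every instruction and receives every result:

```python
# Illustrative accumulator sketch in the LOAD/STORE tradition: arithmetic
# always involves the dedicated register A, so each instruction needs only
# one explicit operand, and the result (sum, difference, ...) lands in A.
class AccumulatorMachine:
    def __init__(self):
        self.A = 0                 # the accumulator
        self.registers = {}

    def load(self, r):             # LOAD r  : A <- [r]
        self.A = self.registers.get(r, 0)

    def store(self, r):            # STORE r : [r] <- A
        self.registers[r] = self.A

    def add(self, r):              # ADD r   : A <- A + [r]
        self.A += self.registers.get(r, 0)

    def sub(self, r):              # SUB r   : A <- A - [r], truncated at 0
        self.A = max(0, self.A - self.registers.get(r, 0))

m = AccumulatorMachine()
m.registers.update({1: 4, 2: 9})
m.load(1); m.add(2); m.store(3)    # R3 := R1 + R2
print(m.registers[3])              # 13
```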

For maximum flexibility, as we have done for the accumulator A, we will consider N just another register subject to increment, decrement, clear, test, direct copy, etc.

Rename the COPY instructions, call INC (N) = RIGHT and DEC (N) = LEFT, and we have the same instructions as the Post–Turing machine, plus an extra CLRN. In the section above we informally showed that a RAM with an unbounded indirection capability produces a Post–Turing machine.

The "head" can be thought of as being in the conditional jump – observe that it uses indirect addressing (cf. Elgot-Robinson p. 398).

The fact that our tape is left-ended presents us with a minor problem: whenever LEFT occurs our instructions will have to test whether the contents of "N" are zero; if so we should leave its count at "0" (this is our choice as designers – for example we might have the machine/model "trigger an event" of our choosing).
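Putting the last few paragraphs together, a sketch (class and method names are inventions of this sketch) treats the registers as the squares of a left-ended tape, with the pointer register N as the head: RIGHT is INC (N), LEFT is DEC (N) guarded so that the count never falls below zero, and the scanned square is reached through N by indirect addressing:

```python
# Illustrative sketch: a Post-Turing-style head driven by the pointer
# register N.  RIGHT = INC (N); LEFT = DEC (N) but never below zero, because
# the tape is left-ended; the scanned square is reached by indirect
# addressing through N.
class TapeOnRegisters:
    def __init__(self):
        self.tape = {}             # register r holds the r-th tape square
        self.N = 0                 # head position, just another register

    def right(self):               # RIGHT : INC (N)
        self.N += 1

    def left(self):                # LEFT : DEC (N), left end stays at 0
        if self.N > 0:             # designer's choice: stall at the left end
            self.N -= 1

    def print_mark(self):          # write 1 on the scanned square
        self.tape[self.N] = 1

    def erase(self):               # write 0 on the scanned square
        self.tape[self.N] = 0

    def scanned_is_zero(self):     # the test used by the conditional jump
        return self.tape.get(self.N, 0) == 0

t = TapeOnRegisters()
t.print_mark(); t.right(); t.right(); t.print_mark()
t.left(); t.left(); t.left()       # the extra LEFT is absorbed at square 0
print(t.N, t.scanned_is_zero())    # 0 False
```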

Throughout this demonstration we have to keep in mind that the instructions in the finite-state machine's TABLE are bounded, i.e. finite. We will build the indirect CPY (i, q, d, φ) with the CASE operator.
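The construction can be sketched as follows; the bound written into the cases is an assumption of this sketch (a stand-in for however the finite TABLE is laid out), not part of the article's demonstration. The contents of the pointer register q select which of the explicitly listed cases performs the direct copy into φ:

```python
# Illustrative sketch of building CPY (i, q, d, phi) by "definition by
# cases" on the contents of the pointer register q.  Because the TABLE of
# the finite-state machine is finite, the cases 0, 1, ..., BOUND must be
# written out in advance; BOUND is a parameter of this sketch only.
BOUND = 7

def indirect_cpy(registers, q, phi):
    pointer = registers.get(q, 0)           # the address held in q
    for k in range(BOUND + 1):              # CASE: is the pointer 0? 1? ...
        if pointer == k:
            registers[phi] = registers.get(k, 0)   # direct copy R_k -> R_phi
            return
    raise IndexError("pointer exceeds the bound fixed in the finite TABLE")

regs = {2: 5, 5: 42}                        # R2 points at R5, which holds 42
indirect_cpy(regs, q=2, phi=0)              # CPY (i, 2, d, 0)
print(regs[0])                              # 42
```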