Unbounded nondeterminism became an important issue in the development of the denotational semantics of concurrency and later figured in research on the theoretical concept of hypercomputation.[1]
An example of the role of fair or unbounded nondeterminism in the merging of strings was given by William D. Clinger in his 1981 thesis.
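Clinger's own construction is not reproduced here, but a minimal Python sketch can show where the unbounded nondeterminism enters a fair merge of two streams; the stream representation, the name `fair_merge`, and the `chunk_sizes` parameter are illustrative assumptions, not taken from the thesis.

```python
from itertools import count, islice
from typing import Iterable, Iterator, TypeVar

T = TypeVar("T")

def fair_merge(a: Iterator[T], b: Iterator[T], chunk_sizes: Iterable[int]) -> Iterator[T]:
    """Merge two infinite streams so that every element of each eventually appears.

    chunk_sizes stands in for the nondeterministic choices: before each element
    taken from b, the merge takes the next chunk (of any finite size) from a.
    Because a chunk may be arbitrarily large, no fixed bound covers every fair
    merge -- this is where the nondeterminism is unbounded.
    """
    for n in chunk_sizes:
        for _ in range(n):   # any finite number of elements from a ...
            yield next(a)
        yield next(b)        # ... but b is never postponed forever

# Example: merge the even and odd numbers with growing chunk sizes.
evens, odds = count(0, 2), count(1, 2)
print(list(islice(fair_merge(evens, odds, count(1)), 15)))
```

Any choice of finite chunk sizes yields a fair merge, but because those sizes are unbounded, no program restricted to bounded nondeterminism can exhibit all of them.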
Edsger Dijkstra argued that it is impossible to implement systems with unbounded nondeterminism.[3]
Gordon Plotkin gave a proof in his original paper on powerdomains: in essence, the computation tree of a program with only bounded nondeterminism is finitely branching, so by König's lemma a program that always terminates has a finite computation tree and therefore only finitely many possible outcomes. William Clinger later provided an analysis of Plotkin's proof.

Spaan et al. have argued that it is nevertheless possible for an unboundedly nondeterministic program to solve the halting problem; their algorithm consists of two parts, defined as follows.[6]

The first part of the program requests a natural number from the second part; after receiving it, it iterates the desired Turing machine for that many steps and accepts or rejects according to whether the machine has halted by then.

The second part of the program loops, at each iteration nondeterministically choosing either to increment a variable or to service the pending request by sending it the variable's current value.
The fairness constraint requires that the request eventually be serviced, for otherwise there is an infinite loop in which only the "increment the variable" branch is ever taken.
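A minimal Python sketch of the two parts described above may help; the names `guess_number` and `halting_test` are illustrative, and the nondeterministic choice is merely simulated with a pseudo-random coin flip, which is exactly the kind of bounded substitute the construction is meant to go beyond.

```python
import random

def guess_number() -> int:
    """Second part: loop forever, at each iteration choosing either to
    increment a variable or to service the request by returning its value.
    Under the fairness constraint the request is eventually serviced, yet the
    value returned can be any natural number."""
    counter = 0
    while True:
        if random.random() < 0.5:   # stand-in for a nondeterministic choice
            counter += 1            # "increment the variable" branch
        else:
            return counter          # "service the request" branch

def halts_within(machine_step, steps: int) -> bool:
    """Run machine_step (a zero-argument step function that returns True once
    the simulated machine has halted) for at most the given number of steps."""
    for _ in range(steps):
        if machine_step():
            return True
    return False

def halting_test(machine_step) -> bool:
    """First part: request a natural number from the second part, iterate the
    machine for that many steps, and accept or reject according to whether it
    has halted by then."""
    n = guess_number()
    return halts_within(machine_step, n)

# Hypothetical usage: a toy "machine" that halts on its 7th step.
def make_toy_machine(halt_at: int):
    t = 0
    def step() -> bool:
        nonlocal t
        t += 1
        return t >= halt_at
    return step

print(halting_test(make_toy_machine(7)))
```

Because the coin flip here is pseudo-random rather than fairly nondeterministic, the guessed bound is usually small and the sketch will often reject machines that do halt; the construction behaves as described only under genuine fair, unbounded nondeterminism.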
Hewitt justified his use of unbounded nondeterminism by arguing that there is no bound that can be placed on how long it takes a computational circuit called an arbiter to settle (see metastability in electronics).[10] Hewitt argued that issues in fairness derive in part from the global state point of view.
The oldest models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) are based on a global state: each computational step transforms one global state of the system into the next.