Selman's theorem

In computability theory, Selman's theorem is a result relating enumeration reducibility to enumerability relative to oracles.

It is named after Alan Selman, who proved it as part of his PhD thesis in 1971.

Informally, a set A is enumeration-reducible to a set B if there is a Turing machine which receives an enumeration of B (it has a special instruction to request the next element, or to learn that no further element has been provided yet) and produces an enumeration of A.

See enumeration reducibility for a precise account.
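
For instance, the doubling map gives a particularly simple enumeration reduction. The following sketch, in Python with illustrative names, shows the corresponding machine behaviour: it converts any enumeration of B into an enumeration of {2x : x ∈ B}, using None for the "no element has been provided yet" answer.

```python
from typing import Iterator, Optional

def double_operator(enum_b: Iterator[Optional[int]]) -> Iterator[Optional[int]]:
    """Toy enumeration operator witnessing {2*x : x in B} <=_e B.

    `enum_b` models an enumeration of B; `None` plays the role of the
    machine's "no new element has been provided yet" answer."""
    for item in enum_b:
        if item is None:
            yield None       # no new input yet, so nothing new to output
        else:
            yield 2 * item   # each element x of B contributes 2*x to A

# Example: an enumeration of B = {1, 3} yields an enumeration of {2, 6}.
print([v for v in double_operator(iter([None, 1, None, 3])) if v is not None])
```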

Selman's theorem:[1][2][3][4][5] A set A is enumeration-reducible to a set B if and only if A is computably enumerable with an oracle X whenever B is computably enumerable with the same oracle X.
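
In symbols, writing A ≤e B for enumeration reducibility and "c.e. in X" for computable enumerability with oracle X, the statement can be rendered as:

```latex
A \le_e B \iff \forall X \,\bigl(\, B \text{ c.e. in } X \;\Longrightarrow\; A \text{ c.e. in } X \,\bigr)
```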

A priori, for a given oracle X, the program enumerating A could run the program enumerating B as a subprogram in order to produce the elements of A from those of B; but it could also use the oracle X directly, perhaps in a way quite different from the program enumerating B, so that there would be no apparent way to obtain one program from the other.

From a slightly different point of view, the theorem is an automatic uniformity result: the hypothesis merely asserts, for each oracle X separately, the existence of some program enumerating A, yet the conclusion provides a single enumeration machine that uniformly produces an enumeration of A from any enumeration of B.

For the proof, fix a function f : ℕ → ℕ ∪ {⊥} such that the range of f with ⊥ removed equals A, and let g be similarly defined for B (the value ⊥ allows finite sets, including the empty set, to be enumerated); such functions are precisely the enumerations of A and B.

The left-to-right implication is straightforward: a machine witnessing the enumeration reduction converts any enumeration of B obtained from an oracle X into an enumeration of A, so A is computably enumerable with oracle X whenever B is.

For the converse, suppose that A is not enumeration-reducible to B. We shall build X such that B is computably enumerable with oracle X, but A is not.

We impose the constraint that X only contain (codes of) pairs ⟨x, y⟩ with x ∈ B, and we arrange that every x ∈ B eventually has at least one such pair placed into X. This ensures that B is computably enumerable with oracle X (through a semi-algorithm that takes an input x and searches for y such that ⟨x, y⟩ ∈ X).
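
A minimal Python sketch of this semi-algorithm, assuming the Cantor pairing function as the encoding of pairs and a total membership function for the oracle (both choices are illustrative):

```python
from typing import Callable

def pair(x: int, y: int) -> int:
    """Cantor pairing function, one possible encoding of <x, y>."""
    return (x + y) * (x + y + 1) // 2 + x

def semi_decide_B(x: int, oracle_X: Callable[[int], bool]) -> bool:
    """Halts (returning True) if and only if some pair <x, y> lies in X.

    Under the constraint that X contains only pairs <x, y> with x in B,
    and a witness pair for every x in B, this halts exactly on the
    elements of B: a semi-algorithm showing B is c.e. with oracle X."""
    y = 0
    while not oracle_X(pair(x, y)):  # unbounded search for a witness y
        y += 1
    return True
```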

It is convenient to view the eventual value of X as an infinite bit string (the i-th bit being the boolean value of "i ∈ X") which is constructed by incrementally appending to a finite bit string.

The construction proceeds in steps. At step n, denote by M the n-th oracle Turing machine; the goal of the step is to ensure that M, run with the eventual oracle X, does not enumerate A.

We write M(Z) for M run with a specific oracle Z (if Z is a finite bit string, out-of-bounds requests return 0).
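
A finite bit string can thus itself serve as an oracle; a minimal Python sketch of this convention (the helper name is illustrative):

```python
def finite_oracle(bits: str):
    """Membership oracle given by a finite bit string: query i is answered
    by the i-th bit, and out-of-bounds queries are answered 0 (False)."""
    return lambda i: i < len(bits) and bits[i] == "1"

# With the string "0101", queries 1 and 3 are answered True, while
# queries 0, 2 and any out-of-bounds index are answered False.
oracle = finite_oracle("0101")
print(oracle(1), oracle(3), oracle(0), oracle(7))  # True True False False
```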

Call an extension of the current finite bit string admissible if it, too, adds only (codes of) pairs ⟨x, y⟩ with x ∈ B.

If some admissible extension Y makes M(Y) enumerate a value outside A, we adopt such a Y, padded with zeros up to the largest position queried in the corresponding computation, as the new current bit string; whatever is appended later, M(X) is then guaranteed to enumerate a value outside A.

Otherwise, every value enumerated by M(Y), for any admissible extension Y, is in A. Suppose first that, in addition, some x ∈ A is enumerated by M(Y) for no admissible extension Y. In this case, we do not change X; it is already ensured that eventually M(X) will not enumerate A, because it cannot enumerate x: indeed, if it did, this would be done after a finite number of oracle invocations, which would lie in some admissible extension Y.

In the remaining case, all values enumerated by M(Y), for Y an admissible extension, are in A, and conversely every element of A is enumerated by M(Y) for at least one admissible extension Y.

In other words, A is exactly the set of all values enumerated by M(Y) for an admissible extension Y. But this set is enumeration-reducible to B: given an enumeration of B, one can enumerate the admissible extensions and dovetail runs of M on all of them, which enumerates exactly A. This contradicts the assumption that A is not enumeration-reducible to B, so the remaining case never occurs and the construction succeeds.
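
The dovetailing behind this reduction can be sketched in Python as follows. This is a simplified sketch: it searches extensions of the empty string only, and run_M is a hypothetical helper standing for a bounded run of M (returning the values M enumerates within a given number of steps on a finite oracle string).

```python
from itertools import combinations, count, islice

def pair(x: int, y: int) -> int:
    """Cantor pairing function, one possible encoding of <x, y>."""
    return (x + y) * (x + y + 1) // 2 + x

def admissible_strings(seen_b, bound):
    """Bit strings of length `bound` whose 1-bits all code pairs <x, y>
    with x among the elements of B enumerated so far."""
    candidates = {pair(x, y) for x in seen_b for y in range(bound)}
    positions = sorted(p for p in candidates if p < bound)
    # Exhaustive over all subsets of positions: wasteful, but immaterial
    # for a sketch of the dovetailing.
    for r in range(len(positions) + 1):
        for subset in combinations(positions, r):
            bits = ["0"] * bound
            for p in subset:
                bits[p] = "1"
            yield "".join(bits)

def enumerate_A(enum_b, run_M):
    """Enumerates the values that M enumerates on some admissible string,
    given an enumeration `enum_b` of B, by dovetailing over longer
    strings, more elements of B, and longer runs of M."""
    seen_b, emitted = [], set()
    for n in count(1):
        seen_b.extend(islice(enum_b, 1))  # pull one more element of B
        for z in admissible_strings(seen_b, n):
            for v in run_M(z, n):         # hypothetical bounded run of M
                if v not in emitted:
                    emitted.add(v)
                    yield v
```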