Newcomb's paradox

The problem was first analyzed in a philosophy paper by Robert Nozick in 1969[1] and appeared in the March 1973 issue of Scientific American, in Martin Gardner's "Mathematical Games" column.

The difficulty is that people confronted with the problem seem to divide almost evenly on it, with large numbers on each side thinking that the opposing half is just being silly.

The problem is considered a paradox because two seemingly logical analyses yield conflicting answers regarding which choice maximizes the player's payout.
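The conflict between the two analyses can be made concrete with a small expected-value calculation. Below is a minimal sketch using the standardly cited payoff amounts ($1,000 in box A; $1,000,000 in box B if one-boxing was predicted); the predictor accuracy `p` is a free parameter introduced here for illustration, not something fixed by the problem statement:

```python
# Expected payoffs in Newcomb's problem under the standard amounts:
# box A always holds $1,000; box B holds $1,000,000 if the predictor
# foresaw one-boxing, and nothing otherwise.  `p` is the assumed
# probability that the predictor is correct.

A = 1_000       # box A (always visible, always taken by a two-boxer)
B = 1_000_000   # box B (filled only when one-boxing was predicted)

def expected_one_box(p):
    # A one-boxer receives B exactly when the predictor was right.
    return p * B

def expected_two_box(p):
    # A two-boxer always gets A, plus B when the predictor was wrong.
    return A + (1 - p) * B

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(p, expected_one_box(p), expected_two_box(p))
```

Expected-utility reasoning favours one-boxing for any accuracy above roughly p = (A + B) / 2B ≈ 0.5005, while the dominance argument fixes the contents of box B and notes that two-boxing always pays $1,000 more regardless of p; that is precisely the conflict the paradox turns on.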

One game-theoretic analysis suggests that, in Newcomb's paradox, the debate over which strategy is 'obviously correct' stems from the fact that interpreting the problem details differently can lead to two distinct noncooperative games.

The optimal strategies for both games can then be derived, and they turn out to be independent of the predictor's infallibility and of questions of causality, determinism, and free will.[10]

Gary Drescher argues in his book Good and Real that the correct decision is to take only box B, by appealing to a situation he argues is analogous – a rational agent in a deterministic universe deciding whether or not to cross a potentially busy street.[11]

Andrew Irvine argues that the problem is structurally isomorphic to Braess's paradox, a non-intuitive but ultimately non-paradoxical result concerning equilibrium points in physical systems of various kinds.

As Burgess emphasises, however, for all practical purposes that is beside the point; the decisions "that determine what happens to the vast bulk of the money on offer all occur in the first [stage]".[14]

Burgess has stressed that – pace certain critics (e.g., Peter Slezak) – he does not recommend that players try to trick the predictor.[15]

Quite to the contrary, Burgess analyses Newcomb's paradox as a common-cause problem, paying special attention to the importance of adopting a set of unconditional probability values – whether implicitly or explicitly – that are entirely consistent at all times.

Burgess also highlights a similarity between Newcomb's paradox and Kavka's toxin puzzle.[17]

Suppose we take the predictor to be a machine that arrives at its prediction by simulating the brain of the chooser when confronted with the problem of which box to choose.
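If the chooser's deliberation is deterministic, such a simulating predictor can be sketched directly: it runs the chooser's decision procedure in advance and fills box B accordingly. A toy illustration of this setup (representing the chooser as a Python function is, of course, an assumption of the sketch, not part of the original thought experiment):

```python
def predictor(choose):
    # "Simulate the chooser's brain": run the very same decision
    # procedure the chooser will later run, and fill box B based
    # on the simulated answer.
    predicted = choose()
    return 1_000_000 if predicted == "one-box" else 0

def payout(choose):
    box_b = predictor(choose)   # prediction and box-filling happen first
    choice = choose()           # then the real choice is made
    if choice == "one-box":
        return box_b
    return 1_000 + box_b        # two-boxing takes both boxes

# A deterministic chooser is predicted perfectly:
print(payout(lambda: "one-box"))   # one-boxer receives 1,000,000
print(payout(lambda: "two-box"))   # two-boxer receives only 1,000
```

Against such a predictor the prediction is correct by construction, so a one-boxing strategy always nets $1,000,000 and a two-boxing strategy only $1,000 – even though, on any particular run, taking both boxes adds $1,000 to whatever the boxes already contain.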