In probability theory, Robbins' problem of optimal stopping[1], named after Herbert Robbins, is sometimes referred to as the fourth secretary problem or the problem of minimizing the expected rank with full information.[2]
Let $X_1, \ldots, X_n$ be independent, identically distributed random variables, uniform on [0, 1].
We observe the $X_k$'s sequentially and must stop on exactly one of them; no recall of preceding observations is permitted.
What stopping rule minimizes the expected rank of the selected observation, and what is its corresponding value?
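Written out explicitly (the notation $R_k$ for the rank and $v(n)$ for the value is introduced here for concreteness), the problem is to determine

$$ v(n) \;=\; \inf_{\tau}\, \mathbb{E}\big[R_{\tau}\big], \qquad R_k \;=\; \sum_{j=1}^{n} \mathbf{1}\{X_j \le X_k\}, $$

where the infimum is taken over all stopping rules $\tau$, i.e. rules for which the decision to stop at stage $k$ may depend only on the observed values $X_1, \ldots, X_k$.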
The general solution to this full-information expected rank problem is unknown. The major difficulty is that the problem is fully history-dependent, that is, the optimal rule depends at every stage on all preceding values, and not only on simpler sufficient statistics of these.
Only bounds are known for the limiting value v as n goes to infinity, namely 1.908 < v < 2.329.
These bounds are obtained by studying so-called memoryless strategies, that is, strategies in which the decision to stop on $X_k$ depends only on the value of $X_k$ and not on the history of observations $X_1, \ldots, X_{k-1}$.
It is known that there is some room to improve the lower bound by further computations for a truncated version of the problem within the class of memoryless strategies.
It is still not known how to improve on the upper bound for the limiting value, whatever class of strategies is considered.[3][4][5]
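For illustration only, the following sketch estimates by Monte Carlo the expected rank achieved by one particular memoryless threshold rule; the thresholds used below (and the function name) are ad hoc choices made for this example, not the optimized thresholds studied in the literature.

```python
import random

def expected_rank_memoryless(n, thresholds, trials=20_000, seed=0):
    """Monte Carlo estimate of the expected rank obtained by a memoryless
    threshold rule: stop at the first k with X_k <= thresholds[k]
    (0-based index); if no observation qualifies, the last one is taken."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n)]
        stop = n - 1                      # forced to accept the last value
        for k, x in enumerate(xs):
            if x <= thresholds[k]:
                stop = k
                break
        chosen = xs[stop]
        # absolute rank of the chosen value among all n observations
        total += 1 + sum(1 for x in xs if x < chosen)
    return total / trials

if __name__ == "__main__":
    n = 100
    # Ad hoc illustrative thresholds (not the optimized ones): accept X_k
    # when it is small relative to the number of observations still to come.
    thresholds = [min(1.0, 2.0 / (n - k)) for k in range(n)]
    print(expected_rank_memoryless(n, thresholds))
```

Optimizing the expected rank over such threshold sequences is how the memoryless-strategy bounds mentioned above are obtained.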
Another approach proposed for making progress on the problem is a continuous-time version in which the observations arrive according to a Poisson process of homogeneous rate 1.
The corresponding value function is bounded and Lipschitz continuous, and a differential equation for this value function can be derived.
Its limiting value yields the solution of Robbins' problem.
The advantage of the continuous-time version is that the answer can be expressed in terms of the solution of a differential equation, i.e. the answer appears in closed form.
However, since the differential equation obtained contains, apart from the "objective function", another (small) unknown function, the approach has so far not provided a decisive advantage for finding the optimal limiting value.
One motivation for studying Robbins' problem is its close relation to the classical secretary problems, in which decisions are based on relative ranks: the relative rank of the ith observation is its rank among the first i values observed, and n is the total number of items.
A curtailed version of such a rank-based rule can be used to select an item with a given probability.
But the major reason is to understand how to cope with full history dependence in a (deceptively easy-looking) problem.
At the Ester's Book International Conference in Israel (2006), Robbins' problem was accordingly named one of the four most important problems in the field of optimal stopping and sequential analysis.
Herbert Robbins presented the above-described problem at the International Conference on Search and Selection in Real Time[note 1] in Amherst in 1990.
He concluded his address with the words "I should like to see this problem solved before I die."
Another optimal stopping problem bearing Robbins' name (and not to be confused with Robbins' problem) is the Chow–Robbins game:[8][9] Given an infinite sequence of IID random variables $X_1, X_2, \ldots$, how should one decide when to stop in order to maximize the sample average $\frac{X_1 + \cdots + X_\tau}{\tau}$, where $\tau$ is the stopping time?
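Equivalently (writing $S_n = X_1 + \cdots + X_n$, a notation introduced here for convenience), the player seeks

$$ \sup_{\tau}\; \mathbb{E}\!\left[\frac{S_\tau}{\tau}\right], $$

where the supremum is taken over stopping times $\tau$ that stop with probability one; this requirement rules out the trivial policy of never stopping.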
For any distribution with finite second moment, there exists an optimal strategy, defined by a sequence of numbers $\beta_1, \beta_2, \ldots$: one keeps sampling and stops at the first $n$ for which $X_1 + \cdots + X_n \ge \beta_n$.
If the distribution of the $X_i$ has finite second moment, then after subtracting the mean and dividing by the standard deviation we get a distribution with mean zero and variance one, so it suffices to treat this normalized case.
The thresholds then grow like $\beta_n \sim \alpha\sqrt{n}$ for a constant $\alpha \approx 0.8399$, which can be proved by solving the same problem with continuous time, with a Wiener process.
When $n$ is small, the asymptotic bound does not apply, and finding the values of $\beta_n$ is much more difficult.
Even the simplest case, in which the $X_i$ are fair coin tosses, is not fully solved.
For the fair coin toss, a strategy is a binary decision: after $n$ tosses, with $k$ heads and $(n-k)$ tails, should one continue or should one stop?
Computer analyses have determined the optimal decision for a large range of states, and they found an almost always optimal decision rule of stopping as soon as the lead of heads over tails reaches a threshold of approximately $\alpha\sqrt{n}$.
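As an illustration of how such decisions can be explored numerically, the sketch below runs backward induction on a truncated version of the fair-coin game in which a stop is forced after a fixed horizon; the horizon, the function name and the printed states are choices made for this example, and the truncation means that the computed boundary is only an approximation (decisions close to the horizon are not reliable).

```python
def fair_coin_stop_boundary(horizon=2000):
    """Backward induction for a TRUNCATED fair-coin Chow-Robbins game:
    the payoff is the proportion of heads when stopping, and a stop is
    forced after `horizon` tosses.  For each n < horizon, return the
    smallest number of heads k at which stopping is at least as good as
    continuing in the truncated game."""
    N = horizon
    nxt = [k / N for k in range(N + 1)]          # forced stop at the horizon
    boundary = {}
    for n in range(N - 1, 0, -1):
        cur = [0.0] * (n + 1)
        smallest_stop_k = n                      # stopping is optimal at k = n (payoff 1)
        for k in range(n, -1, -1):
            cont = 0.5 * nxt[k + 1] + 0.5 * nxt[k]   # expected value after one more toss
            stop = k / n                             # value of stopping now
            cur[k] = max(stop, cont)
            if stop >= cont:
                smallest_stop_k = k
        boundary[n] = smallest_stop_k
        nxt = cur
    return boundary

if __name__ == "__main__":
    b = fair_coin_stop_boundary()
    for n in (4, 10, 30, 100, 300):
        k = b[n]
        print(f"after {n} tosses: stop once there are {k} heads "
              f"(a lead of {2 * k - n} over tails)")
```

Because the game is truncated, such a computation by itself only gives lower bounds on the value of continuing; pinning down the exact optimal decision in borderline states is what makes the problem difficult.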