Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system,[2][3] and problems where there is experimental (random) error in the measurements of the criterion.
In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps.
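As a minimal sketch of this idea (not a method prescribed by the cited sources), the Python snippet below uses a hypothetical noisy objective and a Welch t-test to decide between two candidate points only when their difference in observed means is distinguishable from the measurement noise; the objective, sample sizes, and test are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def noisy_objective(x, rng):
    # Hypothetical objective: a quadratic bowl plus additive Gaussian noise,
    # standing in for a Monte Carlo or experimental estimate of a real system.
    return float(np.sum((x - 0.3) ** 2) + 0.05 * rng.normal())

def pick_better(x_a, x_b, n=30, alpha=0.05, seed=0):
    """Re-evaluate both candidates n times and use Welch's t-test to check
    whether the observed difference in means exceeds the noise level."""
    rng = np.random.default_rng(seed)
    ys_a = np.array([noisy_objective(x_a, rng) for _ in range(n)])
    ys_b = np.array([noisy_objective(x_b, rng) for _ in range(n)])
    _, p = stats.ttest_ind(ys_a, ys_b, equal_var=False)
    if p < alpha:
        return x_a if ys_a.mean() < ys_b.mean() else x_b
    return None  # indistinguishable from noise; gather more samples

print(pick_better(np.array([0.25, 0.35]), np.array([0.8, -0.4])))
```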
Another advantage is that randomness introduced into the search process can be used to obtain interval estimates of the minimum of a function via extreme value statistics.
Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with almost certain good performance uniformly across many data sets, for many sorts of problems.[21]
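The sketch below gives one rough illustration of such an interval estimate; the test function, batch sizes, three-parameter Weibull fit, and bootstrap interval are illustrative assumptions rather than a procedure taken from the cited references. It records the best value found by many independent random searches and fits an extreme value (Weibull) distribution to those batch minima, whose fitted lower endpoint serves as an estimate of the true minimum.

```python
import numpy as np
from scipy import stats

def objective(x):
    # Hypothetical smooth test function; its true minimum is 0 at x = 0.3.
    return float(np.sum((x - 0.3) ** 2))

def batch_minima(n_batches=60, batch_size=300, dim=2, seed=0):
    """Run independent uniform random searches and keep each batch's best value."""
    rng = np.random.default_rng(seed)
    best = []
    for _ in range(n_batches):
        xs = rng.uniform(-1.0, 1.0, size=(batch_size, dim))
        best.append(min(objective(x) for x in xs))
    return np.array(best)

minima = batch_minima()
# Batch minima are approximately Weibull-distributed near the lower endpoint;
# the fitted location parameter estimates the global minimum.
c, loc, scale = stats.weibull_min.fit(minima)
# Bootstrap the fit to obtain a rough interval estimate for the minimum.
rng = np.random.default_rng(1)
boot = [stats.weibull_min.fit(rng.choice(minima, len(minima), replace=True))[1]
        for _ in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"estimated minimum {loc:.4f}, ~95% interval [{lo:.4f}, {hi:.4f}]")
```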
Fred W. Glover[22] argues that reliance on random elements may prevent the development of more intelligent and more effective deterministic components.