Chapman–Robbins bound

In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter.

It is a generalization of the Cramér–Rao bound; compared to the Cramér–Rao bound, it is both tighter and applicable to a wider range of problems.

However, it is usually more difficult to compute.

The bound was independently discovered by John Hammersley in 1950,[1] and by Douglas Chapman and Herbert Robbins in 1951.

Let Θ be the set of parameters for a family of probability distributions { μ_θ : θ ∈ Θ } on a sample space Ω. Then:

Theorem — Given any scalar random variable ĝ : Ω → ℝ, and any two θ, θ′ ∈ Θ, we have

$$\operatorname{Var}_{\theta}[{\hat {g}}]\geq \sup _{\theta '\neq \theta }{\frac {(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{2}}{\chi ^{2}(\mu _{\theta '};\mu _{\theta })}},$$

where χ² denotes the chi-squared divergence.
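As a numerical illustration (not part of the source), consider a single Bernoulli(θ) observation and the unbiased estimator ĝ(X) = X. For this family the supremand turns out to be the same for every θ′, so the Chapman–Robbins bound coincides with the Cramér–Rao value θ(1 − θ):

```python
import numpy as np

# Chapman–Robbins bound for ghat(X) = X, one Bernoulli(theta) draw.
# (Hypothetical illustration; names are ours, not the source's.)
theta = 0.3

def chi2_bernoulli(tp, t):
    """Chi-squared divergence chi^2(Bern(tp); Bern(t))."""
    return (tp - t) ** 2 / t + (tp - t) ** 2 / (1 - t)

# E_{theta'}[ghat] = theta', so the numerator is (theta' - theta)^2.
candidates = [tp for tp in np.linspace(0.01, 0.99, 99)
              if abs(tp - theta) > 1e-9]
cr_bound = max((tp - theta) ** 2 / chi2_bernoulli(tp, theta)
               for tp in candidates)

# The ratio simplifies to theta * (1 - theta) for every theta'.
print(cr_bound)  # ≈ 0.21
```

Since Var(X) = θ(1 − θ) for a Bernoulli variable, the estimator ĝ(X) = X attains the bound in this case.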

A generalization to the multivariable case is:[3]

Theorem — Given any multivariate random variable ĝ : Ω → ℝ^m, and any θ, θ′ ∈ Θ,

$$\chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq (E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{T}\operatorname {Cov} _{\theta }[{\hat {g}}]^{-1}(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}]).$$

By the variational representation of chi-squared divergence:[3]

$$\chi ^{2}(P;Q)=\sup _{g}{\frac {(E_{P}[g]-E_{Q}[g])^{2}}{\operatorname {Var} _{Q}[g]}}$$

Plugging in g = ĝ yields

$$\chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq {\frac {(E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}])^{2}}{\operatorname {Var} _{\theta }[{\hat {g}}]}}.$$

Switch the denominator and the left side, then take the supremum over θ′ ≠ θ, to obtain the single-variate case.

For the multivariate case, we define h = ⟨v, ĝ⟩ for any nonzero vector v and plug it into the variational representation to obtain

$$\chi ^{2}(\mu _{\theta '};\mu _{\theta })\geq {\frac {(E_{\theta '}[h]-E_{\theta }[h])^{2}}{\operatorname {Var} _{\theta }[h]}}={\frac {\langle v,E_{\theta '}[{\hat {g}}]-E_{\theta }[{\hat {g}}]\rangle ^{2}}{v^{T}\operatorname {Cov} _{\theta }[{\hat {g}}]v}}.$$

Since v is arbitrary, using the linear algebra fact that

$$\sup _{v\neq 0}{\frac {\langle v,w\rangle ^{2}}{v^{T}Mv}}=w^{T}M^{-1}w,$$

we obtain the multivariate case.
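That linear algebra fact follows from the Cauchy–Schwarz inequality; a sketch (added here, assuming M = Cov_θ[ĝ] is positive definite):

```latex
\frac{\langle v, w\rangle^{2}}{v^{T} M v}
  = \frac{\langle M^{1/2} v,\, M^{-1/2} w\rangle^{2}}{\lVert M^{1/2} v\rVert^{2}}
  \leq \lVert M^{-1/2} w\rVert^{2}
  = w^{T} M^{-1} w,
```

with equality when v is proportional to M^{-1} w, so the supremum is attained and equals w^T M^{-1} w.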

In the usual case, Ω is the sample space of n independent draws of an 𝒳-valued random variable X whose distribution λ_θ belongs to a parameterized family of probability distributions, μ_θ = λ_θ^{⊗n} is the corresponding n-fold product measure, and ĝ is an estimator of θ. Then, as θ′ → θ, the expression inside the supremum in the Chapman–Robbins bound converges to the Cramér–Rao bound of ĝ, assuming the regularity conditions of the Cramér–Rao bound hold.
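A sketch of why the limit recovers the Cramér–Rao bound (a standard Taylor-expansion argument, not spelled out in the source): writing p(x; θ) for the density of μ_θ, under the regularity conditions, as δ → 0,

```latex
\chi^{2}(\mu_{\theta+\delta}; \mu_{\theta})
  = \int \frac{\bigl(p(x;\theta+\delta) - p(x;\theta)\bigr)^{2}}{p(x;\theta)}\,dx
  = \delta^{2} \int \frac{\bigl(\partial_{\theta} p(x;\theta)\bigr)^{2}}{p(x;\theta)}\,dx + o(\delta^{2})
  = \delta^{2}\, I(\theta) + o(\delta^{2}),
```

where I(θ) is the Fisher information of μ_θ. For an unbiased estimator the numerator is ((θ + δ) − θ)² = δ², so the ratio tends to 1/I(θ), the Cramér–Rao bound.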

This implies that, when both bounds exist, the Chapman–Robbins version is always at least as tight as the Cramér–Rao bound; in many cases, it is substantially tighter.

The Chapman–Robbins bound also holds under much weaker regularity conditions.

For example, no assumption is made regarding the differentiability of the probability density function p(x; θ) of the underlying distribution.

When p(x; θ) is not differentiable in θ, the Fisher information is not defined, and hence the Cramér–Rao bound does not exist.
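A concrete illustration (our example, not from the source): for the uniform family U(0, θ), the support depends on θ and the Cramér–Rao regularity conditions fail, yet the Chapman–Robbins bound is easy to evaluate. For a single draw and an unbiased estimator of θ, one can compute χ²(U(0, θ′); U(0, θ)) = (θ − θ′)/θ′ for θ′ < θ (and infinite for θ′ > θ), so the supremand is (θ − θ′)θ′, maximized at θ′ = θ/2 with value θ²/4:

```python
import numpy as np

# Chapman–Robbins bound for an unbiased estimator of theta from one
# U(0, theta) draw. (Illustrative example; not from the source text.)
theta = 1.0

def chi2_uniform(tp, t):
    """chi^2(U(0,tp); U(0,t)) = (t - tp)/tp for tp < t, else infinite."""
    return (t - tp) / tp if tp < t else np.inf

tps = np.linspace(0.01, 0.99, 99)
bound = max((tp - theta) ** 2 / chi2_uniform(tp, theta) for tp in tps)

# The supremand (theta - tp) * tp peaks at tp = theta / 2: theta^2 / 4.
print(bound)  # ≈ 0.25
```

For comparison, the unbiased estimator ĝ(X) = 2X has variance θ²/3 ≥ θ²/4, consistent with the bound.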