Fisher consistency

Suppose we have a statistical sample X1, ..., Xn where each Xi follows a cumulative distribution Fθ which depends on an unknown parameter θ.[1] If an estimator of θ based on the sample can be represented as a functional of the empirical distribution function F̂n,

  θ̂ = T(F̂n),

the estimator is said to be Fisher consistent if

  T(Fθ) = θ,

that is, if applying the functional to the population distribution itself returns the true parameter value.

Any estimator of this form is automatically a symmetric function of the data, since the empirical distribution function F̂n is unchanged when the sample is reordered; an estimator T that is not symmetric can be replaced by its average over all reorderings of the sample. The resulting estimator will have the same expected value as T and its variance will be no larger than that of T.

If the strong law of large numbers can be applied, the empirical distribution functions F̂n converge pointwise to Fθ with probability one, allowing us to express Fisher consistency as a limit: the estimator is Fisher consistent if

  T(lim F̂n) = θ,

where the limit is taken as n → ∞.
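To make the plug-in view concrete, here is a minimal Python sketch (an illustration added here, not part of the article): the functional T is the mean functional, and the population is an assumed exponential distribution with mean θ = 2, so T(Fθ) = θ exactly while T(F̂n) approaches it as n grows.

    # A minimal sketch of the plug-in view of Fisher consistency, assuming a
    # hypothetical Exponential population with mean theta (not from the article).
    # T is the mean functional T(F) = ∫ x dF(x), applied to discrete distributions.
    import numpy as np

    rng = np.random.default_rng(0)
    theta = 2.0  # true parameter: the mean of the Exponential(scale=theta) population

    def T(points, weights):
        """Mean functional applied to a distribution putting `weights` on `points`."""
        return float(np.dot(points, weights))

    # At the population distribution, T(F_theta) equals theta exactly (Fisher
    # consistency): the mean of Exponential(scale=theta) is theta.
    print("T(F_theta) =", theta)

    # At the empirical distribution F-hat_n, which puts mass 1/n on each observation,
    # T(F-hat_n) is the sample mean and approaches theta by the strong law.
    for n in (10, 1000, 100_000):
        x = rng.exponential(scale=theta, size=n)
        print(f"n = {n:6d}   T(F-hat_n) = {T(x, np.full(n, 1.0 / n)):.4f}")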

Suppose our sample is obtained from a finite population Z1, ..., Zm. The sample can then be summarised by the proportion nj/n of observations equal to each value Zj, and an estimator of the form T(n1/n, ..., nm/n) has the population analogue T(p1, ..., pm), where pj = P(X = Zj). Suppose the parameter of interest is the expected value μ and the estimator is the sample mean, which can be written

  μ̂ = (1/n) Σi Xi = Σj Zj (nj/n),  with nj = Σi I(Xi = Zj),

where I is the indicator function. The population analogue is Σj Zj pj = E[X] = μ, so the sample mean is Fisher consistent for μ.
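A numerical check of this calculation, as a hedged sketch (the population values Z and probabilities p below are made up for illustration):

    # Sketch of the finite-population example: the sample mean as a function of
    # the proportions n_j/n. The values Z and probabilities p are made up for
    # illustration; they are not from the article.
    import numpy as np

    rng = np.random.default_rng(1)
    Z = np.array([1.0, 3.0, 7.0])   # finite population values Z1, ..., Zm
    p = np.array([0.2, 0.5, 0.3])   # p_j = P(X = Z_j)
    mu = float(np.dot(Z, p))        # true expected value

    def T(props):
        """Estimator as a function of the sample proportions (n1/n, ..., nm/n)."""
        return float(np.dot(Z, props))

    x = rng.choice(Z, size=1000, p=p)
    props = np.array([(x == z).mean() for z in Z])  # n_j/n via averaged indicators I(Xi = Zj)

    print("sample:     T(n1/n, ..., nm/n) =", T(props))       # close to mu
    print("population: T(p1, ..., pm)     =", T(p), "=", mu)  # exactly mu: Fisher consistent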

Maximising the likelihood function L gives an estimate that is Fisher consistent for a parameter b if

  E[ ∂/∂b log L(X; b) ] = 0  at  b = b0,

where b0 represents the true value of b; in other words, the expected score vanishes at the true parameter, so maximising the likelihood at the population level recovers b0.
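This condition can be checked by Monte Carlo in a simple model; the sketch below (an added illustration with an assumed N(b0, 1) model, not from the article) estimates the expected score at and away from b0.

    # Sketch of the maximum-likelihood condition E[∂ log L(X; b)/∂b] = 0 at b = b0,
    # checked by Monte Carlo for an assumed N(b0, 1) model (not from the article).
    import numpy as np

    rng = np.random.default_rng(2)
    b0 = 1.5  # true parameter value

    def score(x, b):
        """∂/∂b of the log-density of N(b, 1) at observation x."""
        return x - b

    x = rng.normal(loc=b0, scale=1.0, size=1_000_000)
    print("mean score at b = b0:      ", score(x, b0).mean())        # ≈ 0
    print("mean score at b = b0 + 0.3:", score(x, b0 + 0.3).mean())  # ≈ -0.3, not 0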

In classification, a loss function is Fisher consistent if the population minimizer of the risk based on that loss leads to the Bayes optimal decision rule.
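As an added illustration (the choice of the logistic loss and the grid search are assumptions, not from the article): for the logistic loss ℓ(yf) = log(1 + exp(−yf)), the population minimizer of the conditional risk η ℓ(f) + (1 − η) ℓ(−f) is the log-odds f* = log(η/(1 − η)), whose sign agrees with the Bayes rule sign(2η − 1); the sketch verifies this numerically.

    # Sketch: Fisher consistency of the logistic loss for binary classification.
    # For each eta = P(Y = 1 | X = x), the minimizer f* of the conditional risk
    # should classify like the Bayes rule sign(2*eta - 1). Assumed details: the
    # loss choice and the grid search are illustrative, not from the article.
    import numpy as np

    def logistic_loss(margin):
        return np.log1p(np.exp(-margin))

    def conditional_risk(f, eta):
        # population risk of score f at a point with P(Y = 1 | x) = eta
        return eta * logistic_loss(f) + (1 - eta) * logistic_loss(-f)

    f_grid = np.linspace(-10, 10, 400_001)
    for eta in (0.1, 0.4, 0.6, 0.9):
        f_star = f_grid[np.argmin(conditional_risk(f_grid, eta))]
        print(f"eta = {eta:.1f}   f* = {f_star:+.3f}   "
              f"log-odds = {np.log(eta / (1 - eta)):+.3f}   "
              f"sign(f*) = {np.sign(f_star):+.0f} (Bayes: {np.sign(2*eta - 1):+.0f})")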