Relative likelihood

Assume that we are given some data x for which we have a statistical model with parameter θ.

Suppose that the maximum likelihood estimate for θ is θ̂. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of θ̂. The relative likelihood of θ is defined to be

$$ R(\theta) = \frac{\mathcal{L}(\theta \mid x)}{\mathcal{L}(\hat{\theta} \mid x)}, $$

where $\mathcal{L}(\theta \mid x)$ denotes the likelihood function.
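As a minimal sketch of this definition, the following computes R(θ) for a binomial model; the data (10 successes in 30 trials) are hypothetical numbers chosen for illustration.

```python
import math

def binom_loglik(theta, k, n):
    # Log-likelihood of success probability theta for k successes in n trials
    # (the binomial coefficient cancels in the likelihood ratio, so it is omitted).
    return k * math.log(theta) + (n - k) * math.log(1 - theta)

def relative_likelihood(theta, k, n):
    # R(theta) = L(theta | x) / L(theta_hat | x); for the binomial, theta_hat = k/n.
    theta_hat = k / n
    return math.exp(binom_loglik(theta, k, n) - binom_loglik(theta_hat, k, n))

# Hypothetical data: 10 successes in 30 trials, so the MLE is 1/3.
print(relative_likelihood(1/3, 10, 30))  # 1.0 at the MLE
print(relative_likelihood(0.5, 10, 30))  # strictly between 0 and 1
```

Working on the log scale and exponentiating the difference avoids underflow when the likelihoods themselves are tiny.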

In terms of percentages, a p% likelihood region for θ is defined to be

$$ \left\{ \theta : R(\theta) \ge \frac{p}{100} \right\}. $$

If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values; such a region is then called a likelihood interval.
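A likelihood region can be traced out numerically by scanning R(θ) over a grid. The sketch below again assumes a hypothetical binomial sample (10 successes in 30 trials) and uses p = 14.65% as the example cutoff:

```python
import math

def relative_likelihood(theta, k=10, n=30):
    # R(theta) for a binomial sample with k successes in n trials; the MLE is k/n.
    ll = lambda t: k * math.log(t) + (n - k) * math.log(1 - t)
    return math.exp(ll(theta) - ll(k / n))

# Trace out the 14.65% likelihood region on a grid of theta values.
cutoff = 0.1465
grid = [i / 1000 for i in range(1, 1000)]
region = [t for t in grid if relative_likelihood(t) >= cutoff]
print(min(region), max(region))  # approximate endpoints of the likelihood interval
```

Because the binomial likelihood is unimodal in θ, the region found here is a single interval around the MLE.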

Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).

If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability).[1][6]

In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods, and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees of freedom (df) equal to the difference in df between the two models (therefore, the e⁻² likelihood interval is the same as the 0.954 confidence interval, assuming the difference in df to be 1).[6][7]

The definition of relative likelihood can be generalized to compare different statistical models.[8] This generalization is based on AIC (the Akaike information criterion): given two models M1 and M2 with AIC(M1) ≤ AIC(M2), the relative likelihood of M2 with respect to M1 is defined as

$$ \exp\!\left( \frac{\operatorname{AIC}(M_1) - \operatorname{AIC}(M_2)}{2} \right). $$

To see that this is a generalization of the earlier definition, suppose that we have some model M with a (possibly multivariate) parameter θ.
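The model-level comparison can be sketched as follows; the maximized log-likelihoods and parameter counts for the two candidate models are hypothetical numbers, not taken from any real fit.

```python
import math

def aic(log_lik_max, n_params):
    # AIC = 2k - 2 * (maximized log-likelihood), with k the number of parameters.
    return 2 * n_params - 2 * log_lik_max

# Hypothetical maximized log-likelihoods for two candidate models.
ll_m1, k_m1 = -100.0, 2   # simpler model
ll_m2, k_m2 = -98.5, 4    # richer model with a slightly better fit

# Relative likelihood of M2 with respect to M1: exp((AIC(M1) - AIC(M2)) / 2).
rel = math.exp((aic(ll_m1, k_m1) - aic(ll_m2, k_m2)) / 2)
print(rel)
```

With AIC(M1) ≤ AIC(M2), the result lies in (0, 1] and can be read as how plausible M2 is relative to the AIC-preferred model M1; here the richer model's better fit does not offset its extra parameters.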