Lindley's paradox

Lindley's paradox is a counterintuitive situation in statistics in which the Bayesian and frequentist approaches to a hypothesis testing problem give different results for certain choices of the prior distribution.

The problem of the disagreement between the two approaches was discussed in Harold Jeffreys' 1939 textbook;[1] it became known as Lindley's paradox after Dennis Lindley called the disagreement a paradox in a 1957 paper.[2]

Although referred to as a paradox, the differing results from the Bayesian and frequentist approaches can be explained as using them to answer fundamentally different questions, rather than an actual disagreement between the two methods.

Nevertheless, for a large class of priors the differences between the frequentist and Bayesian approaches are caused by keeping the significance level fixed: as even Lindley recognized, "the theory does not justify the practice of keeping the significance level fixed", and "some computations by Prof. Pearson in the discussion to that paper emphasized how the significance level would have to change with the sample size, if the losses and prior probabilities were kept fixed".[2] In fact, if the critical value increases with the sample size suitably fast, then the disagreement between the frequentist and Bayesian approaches becomes negligible as the sample size increases.[3]

The paradox continues to be a source of active discussion.

Consider the result x of some experiment, with two possible explanations (hypotheses H0 and H1) and some prior distribution π representing uncertainty as to which hypothesis is more accurate before taking into account x.

Lindley's paradox occurs when

1. the result x is "significant" by a frequentist test of H0, indicating sufficient evidence to reject H0, say, at the 5% level, and
2. the posterior probability of H0 given x is high, indicating strong evidence that H0 is in better agreement with x than H1.

These results can occur at the same time when H0 is very specific, H1 is more diffuse, and the prior distribution does not strongly favor one or the other, as seen below.

The paradox can be illustrated with a numerical example. Suppose that in a certain city 49,581 boys and 48,870 girls have been born over a certain time period, so that the observed fraction x of male births is 49,581/98,451 ≈ 0.5036.

We assume the fraction of male births is a binomial variable with parameter θ. We are interested in testing whether θ is 0.5 or some other value, so the null hypothesis is H0: θ = 0.5 and the alternative is H1: θ ≠ 0.5.

The frequentist approach to testing H0 is to compute a p-value, the probability of observing a fraction of boys at least as large as x, assuming H0 is true. Because the number of births is very large, we can use a normal approximation for the number of male births, X ~ N(μ, σ²), with μ = np = 98,451 × 0.5 = 49,225.5 and σ² = np(1 − p) = 98,451 × 0.5 × 0.5, so σ ≈ 156.9, to compute

P(X ≥ 49,581 | μ = 49,225.5) = 1 − Φ((49,581 − 49,225.5)/156.9) ≈ 0.0117.

We would have been equally surprised if we had seen 49,581 female births, i.e. x ≈ 0.4964, so a frequentist would usually perform a two-sided test, for which the p-value would be p ≈ 2 × 0.0117 = 0.0235. In both cases, the p-value is lower than the significance level α = 5%, so the frequentist approach rejects H0, as the data disagree with it at the 5% level.
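The frequentist p-value can be checked with a short script; a sketch using the normal approximation to the binomial, as in the calculation above (the exact binomial tail gives essentially the same answer at this sample size):

```python
from math import sqrt, erfc

n, k = 98_451, 49_581          # total births, male births
mu = n * 0.5                   # mean of X under H0: theta = 0.5
sigma = sqrt(n * 0.5 * 0.5)    # standard deviation under H0

# One-sided p-value P(X >= k), via the normal survival function
# 1 - Phi(z) = 0.5 * erfc(z / sqrt(2)).
z = (k - mu) / sigma
p_one_sided = 0.5 * erfc(z / sqrt(2))
p_two_sided = 2 * p_one_sided

print(f"{p_one_sided:.4f}")    # 0.0117
print(f"{p_two_sided:.4f}")    # 0.0235
```

Both values fall below α = 5%, reproducing the frequentist rejection of H0.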

Assuming no reason to favor one hypothesis over the other, the Bayesian approach would be to assign prior probabilities π(H0) = π(H1) = 0.5 and a uniform distribution to θ under H1, and then to compute the posterior probability of H0 using Bayes' theorem:

P(H0 | k) = P(k | H0) π(H0) / [P(k | H0) π(H0) + P(k | H1) π(H1)].

After observing k = 49,581 boys out of n = 98,451 births, we can compute the posterior probability of each hypothesis using the probability mass function for a binomial variable:

P(k | H0) = C(n, k) (1/2)^n ≈ 1.95 × 10⁻⁴,
P(k | H1) = ∫₀¹ C(n, k) θ^k (1 − θ)^(n−k) dθ = 1/(n + 1) ≈ 1.02 × 10⁻⁵,

where C(n, k) denotes the binomial coefficient. From these values we find P(H0 | k) ≈ 0.95, strongly favoring H0 over H1.
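These likelihoods and the resulting posterior can be verified numerically; a sketch in Python, working with log-probabilities via `math.lgamma` because the binomial coefficient itself is astronomically large:

```python
from math import lgamma, exp, log

n, k = 98_451, 49_581

def log_binom_pmf(k, n, theta):
    """Log of the binomial probability mass function."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_coeff + k * log(theta) + (n - k) * log(1 - theta)

# Likelihood under H0: theta = 0.5 exactly.
p_k_h0 = exp(log_binom_pmf(k, n, 0.5))   # ~ 1.95e-4

# Likelihood under H1: theta uniform on [0, 1]; integrating the
# binomial pmf against a uniform prior gives exactly 1 / (n + 1).
p_k_h1 = 1 / (n + 1)                     # ~ 1.02e-5

# Equal prior probabilities on the two hypotheses cancel in the ratio.
posterior_h0 = p_k_h0 / (p_k_h0 + p_k_h1)
print(f"{posterior_h0:.2f}")             # 0.95
```

The same data that the frequentist test rejects at the 5% level thus leave the Bayesian posterior strongly in favor of H0.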

Naaman[3] proposed an adaptation of the significance level to the sample size in order to control false positives: αn = n^(−r), with r > 1/2.

At least in the numerical example, taking r = 1/2 results in a significance level of 0.00318; since both p-values computed above exceed this threshold, the frequentist would not reject the null hypothesis, which is in agreement with the Bayesian approach.
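A quick check of this rule (a sketch; it yields ≈ 0.00319, matching the 0.00318 quoted above up to rounding):

```python
n = 98_451
r = 0.5
alpha_n = n ** -r          # sample-size-dependent significance level
print(round(alpha_n, 5))   # 0.00319

# Two-sided p-value from the frequentist test above.
p_value = 0.0235
# With the adapted level, H0 is no longer rejected.
print(p_value < alpha_n)   # False
```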

If we use an uninformative prior and test a hypothesis more similar to the one in the frequentist approach, the paradox disappears. For example, if we calculate the posterior distribution of θ using a uniform prior distribution over [0, 1], we find

θ | k ~ Beta(k + 1, n − k + 1).

If we use this to check the probability that a newborn is more likely to be a boy than a girl, i.e. P(θ > 0.5 | k), we find that the integral of this posterior density over (0.5, 1] is approximately 0.988. In other words, it is very likely that the proportion of male births is above 0.5.
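This posterior probability can be approximated with a short script; a sketch using a normal approximation to the Beta(k + 1, n − k + 1) posterior, with its mean and variance taken from the standard beta-distribution formulas:

```python
from math import sqrt, erf

n, k = 98_451, 49_581
a, b = k + 1, n - k + 1    # Beta posterior parameters under a uniform prior

mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))

# P(theta > 0.5 | k) under a normal approximation to the posterior,
# using Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
z = (0.5 - mean) / sqrt(var)
prob_boy_majority = 0.5 * (1 - erf(z / sqrt(2)))
print(f"{prob_boy_majority:.3f}")   # 0.988
```

Note that this is (approximately) one minus the one-sided frequentist p-value, which is why the paradox disappears when the tested hypotheses are made comparable.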

Neither analysis directly gives an estimate of the effect size, but both could be used to determine, for instance, whether the fraction of boy births is likely to be above some particular threshold.

The apparent disagreement between the two approaches is caused by a combination of factors.

To understand why, it is helpful to consider the two hypotheses as generators of the observations. Under H0, we choose θ = 0.5 and ask how likely it is to see 49,581 boys in 98,451 births. Under H1, we choose θ at random from anywhere in [0, 1] and ask the same question. Most of the possible values for θ under H1 are very poorly supported by the observations, which drags down the average likelihood of the data under H1.
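The point can be illustrated by comparing log-likelihoods of the observed count for a handful of θ values (a sketch; the log scale is essential, since the raw probabilities underflow for θ far from the observed ratio):

```python
from math import lgamma, log

n, k = 98_451, 49_581

def log_binom_pmf(k, n, theta):
    """Log of the binomial probability mass function."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return log_coeff + k * log(theta) + (n - k) * log(1 - theta)

# H0 pins theta at 0.5; H1 draws theta uniformly from [0, 1].
for theta in (0.5036, 0.5, 0.52, 0.6, 0.9):
    print(f"theta = {theta}: log P(k | theta) = {log_binom_pmf(k, n, theta):,.0f}")

# theta = 0.5 is only mildly penalized relative to the observed ratio,
# while most values of theta that H1 can generate make the observation
# astronomically unlikely, dragging down the average likelihood under H1.
```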

In essence, the apparent disagreement between the methods is not a disagreement at all, but rather two different statements about how the hypotheses relate to the data:

- The ratio of the sex of newborns is improbably 50/50 male/female, according to the frequentist test.
- The ratio 50/50 is a far better explanation than most other ratios, according to the Bayesian analysis.

For example, this choice of hypotheses and prior probabilities implies the statement "if 0.49 < θ < 0.51, then the prior probability of θ being exactly 0.5 is 0.5/0.51 ≈ 98%". Given such a strong prior preference for θ = 0.5, it is easy to see why the Bayesian analysis favors H0 in the face of x ≈ 0.5036.

Looking at it another way, we can see that the prior distribution is essentially flat with a delta function at θ = 0.5. Clearly, this is dubious: in fact, picturing real numbers as being continuous, it would be more logical to assume that it is impossible for any given number to be exactly the parameter value, i.e., we should assume P(θ = 0.5) = 0.

A more realistic distribution for θ in the alternative hypothesis produces a less surprising result for the posterior of H0. For example, if we replace H1 with H2: θ = x, i.e. the maximum-likelihood estimate for θ, the posterior probability of H0 would be only about 0.07, compared to 0.93 for H2 (of course, one cannot actually use the MLE as part of a prior distribution).
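The 0.07 figure can be reproduced by comparing the likelihood of the data under θ = 0.5 with the likelihood under the maximum-likelihood value θ = k/n (a sketch, again with equal prior weight on the two point hypotheses):

```python
from math import lgamma, exp, log

n, k = 98_451, 49_581

def binom_pmf(k, n, theta):
    """Binomial probability mass function, computed via logs to avoid overflow."""
    log_coeff = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    return exp(log_coeff + k * log(theta) + (n - k) * log(1 - theta))

p_h0 = binom_pmf(k, n, 0.5)     # likelihood under H0: theta = 0.5
p_h2 = binom_pmf(k, n, k / n)   # likelihood under H2: theta = k/n (the MLE)

# Equal priors on the two point hypotheses cancel in the ratio.
posterior_h0 = p_h0 / (p_h0 + p_h2)
print(f"{posterior_h0:.2f}")    # 0.07
```

Against a sharp, well-supported alternative, the same data now count against H0, in qualitative agreement with the frequentist conclusion.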

[Figure: Distribution of p under the null hypothesis, and the posterior distribution of p.]