[3][4][5] The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation.
A central claim of the frequentist approach is that, as the number of trials increases, the relative frequency of an event stabilizes: its changes from trial to trial diminish and it approaches a limiting value.
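In symbols, this limiting-frequency claim is commonly sketched as follows (using the illustrative notation $n_A$ for the number of trials in which an event $A$ occurs out of $n$ trials in total):

$$P(A) = \lim_{n \to \infty} \frac{n_A}{n}.$$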
It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation.
The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.[citation needed]
As Feller notes:[a] "There is no place in our system for speculations concerning the probability that the sun will rise tomorrow."
[15][c][16] Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson.
In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation.
Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman, and Pearson.
Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population";[17] Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson collaborated on the creation of hypothesis testing.
Fisher said, "... the theory of inverse probability is founded upon an error, [referring to Bayes' theorem] and must be wholly rejected."
[22][23] Kendall observed that "The Frequency Theory of Probability" had been used a generation earlier as a chapter title in Keynes (1921).