Frequentist inference

Frequentist inference is a type of statistical inference based on frequentist probability, which treats "probability" as equivalent to "frequency" and draws conclusions from sample data by emphasizing the frequency or proportion of findings in the data.

Frequentism is based on the presumption that statistics represent probabilistic frequencies.

This view was primarily developed by Ronald Fisher and the team of Jerzy Neyman and Egon Pearson.

Ronald Fisher contributed to frequentist statistics by developing the concept of "significance testing", which is the study of how significant an observed value of a statistic is when compared against the hypothesis under test.
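
As a concrete sketch (the data here are illustrative, and a one-sample t-test stands in for Fisher's general idea), a significance test reduces to computing a test statistic and its p-value under the hypothesis:

```python
# A minimal illustration of Fisherian significance testing: a one-sample
# t-test of the null hypothesis that the population mean equals 0.
# The data values here are illustrative, not from any real experiment.
from scipy import stats

sample = [0.3, 1.1, -0.2, 0.8, 1.5, 0.6, 0.9, -0.1]

# The t-statistic measures how far the sample mean lies from the
# hypothesized mean, in units of the estimated standard error.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

# The p-value is the probability, under the null hypothesis, of observing
# a statistic at least as extreme as the one actually observed.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```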

Neyman and Pearson extended Fisher's ideas to apply to multiple hypotheses.

This rigorously defines the confidence interval, which is the range of outcomes about which we can make statistical inferences.
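
For illustration (a minimal sketch with made-up data, using the usual t-based interval rather than any construction specific to this article), a 95% confidence interval for a population mean can be computed as follows:

```python
# A sketch of a 95% confidence interval for a population mean, using the
# t distribution. Under repeated sampling, about 95% of intervals built
# this way would contain the true mean; the data below are illustrative.
import numpy as np
from scipy import stats

sample = np.array([4.8, 5.2, 5.5, 4.9, 5.1, 5.4, 5.0, 5.3])
n = len(sample)
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)          # standard error of the mean

t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value
lower, upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI for the mean: ({lower:.3f}, {upper:.3f})")
```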

Two complementary concepts in frequentist inference are the Fisherian reduction and the Neyman-Pearson operational criteria.

Together these concepts illustrate a way of constructing frequentist intervals that define the limits for the parameter of interest, θ.

The Fisherian reduction is a method of determining the interval within which the true value of θ may lie, while the Neyman-Pearson operational criteria is a decision rule about making a priori probability assumptions.

The Neyman-Pearson operational criteria is an even more specific understanding of the range of outcomes within which the relevant statistic can be said to lie.[3] As a point of reference, the complement to this in Bayesian statistics is the minimum Bayes risk criterion.

Because the Neyman-Pearson criteria rely on our ability to find a range of outcomes where θ is likely to occur, the Neyman-Pearson approach is only possible where a Fisherian reduction can be achieved.[4]
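
In the standard textbook formulation (not wording taken from this article), the Neyman-Pearson operational criteria can be stated as choosing a rejection region R that maximizes power subject to a fixed Type I error rate α; for simple hypotheses, the optimal region is a likelihood-ratio region:

\[
\max_{R}\; \Pr(X \in R \mid H_1)
\quad\text{subject to}\quad
\Pr(X \in R \mid H_0) \le \alpha,
\qquad
R = \left\{ x : \frac{L(x \mid H_1)}{L(x \mid H_0)} \ge k \right\}.
\]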

Frequentist inferences are associated with the application of frequentist probability to experimental design and interpretation, and specifically with the view that any given experiment can be considered one of an infinite sequence of possible repetitions of the same experiment, each capable of producing statistically independent results.

This is especially pertinent because the significance of a frequentist test can vary under model selection, a violation of the likelihood principle.

Frequentism is the study of probability with the assumption that results occur with a given frequency over some period of time or with repeated sampling.

As such, frequentist analysis must be formulated with consideration of the assumptions of the problem it attempts to analyze.

The epistemic approach is the study of variability; namely, how often we expect a statistic to deviate from some observed value.[7]

For concreteness, imagine trying to measure the stock market quote versus evaluating an asset's price.

For the epistemic approach, we formulate the problem as if we want to attribute probability to a hypothesis.

Frequentist statistics is conditioned not solely on the data but also on the experimental design.[8]
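
To see why the design matters, consider a sketch (a standard textbook-style illustration with hypothetical numbers, not an example from this article): the same observed data, nine successes and three failures, can yield different p-values depending on whether the number of trials was fixed in advance or sampling continued until the third failure.

```python
# Same data (9 successes, 3 failures), two experimental designs, two p-values
# for testing H0: success probability = 0.5 against p > 0.5.
# This is a textbook-style illustration with made-up numbers.
from scipy import stats

# Design 1: the number of trials (12) was fixed in advance -> binomial model.
# p-value = P(at least 9 successes in 12 trials | p = 0.5)
p_binomial = stats.binom.sf(8, n=12, p=0.5)

# Design 2: sampling continued until the 3rd failure -> negative binomial model.
# p-value = P(at least 9 successes before the 3rd failure | p = 0.5)
p_negbinom = stats.nbinom.sf(8, n=3, p=0.5)

print(f"binomial design:          p = {p_binomial:.4f}")   # ~0.073
print(f"negative binomial design: p = {p_negbinom:.4f}")   # ~0.033
```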

In frequentist statistics, the cutoff for understanding the frequency of occurrence is derived from the family of distributions used in the experimental design.[9]
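
For instance (a minimal sketch assuming a normal model and the conventional 5% level), the cutoff is simply a quantile of the assumed distribution:

```python
# The rejection cutoff (critical value) comes from the distributional family
# assumed in the design; here a standard normal model at the conventional
# 5% level, purely for illustration.
from scipy import stats

alpha = 0.05
z_one_sided = stats.norm.ppf(1 - alpha)        # ~1.645 for a one-sided test
z_two_sided = stats.norm.ppf(1 - alpha / 2)    # ~1.960 for a two-sided test
print(z_one_sided, z_two_sided)
```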

For the epidemiological approach, the central idea behind frequentist statistics must be discussed.

This leads to the Fisherian reduction and the Neyman-Pearson operational criteria, discussed above.

When we define the Fisherian reduction and the Neyman-Pearson operational criteria for any statistic, we are assessing, according to these authors, the likelihood that the true value of the statistic will occur within a given range of outcomes assuming a number of repetitions of our sampling method.
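
A small simulation can make this repeated-sampling reading concrete (a sketch with illustrative settings; the true mean, sample size, and number of repetitions are arbitrary): over many independent repetitions, roughly 95% of nominal 95% confidence intervals contain the true value.

```python
# A small simulation of the repeated-sampling idea: build a 95% confidence
# interval for a known true mean over many independent repetitions and check
# how often the interval covers it. All settings here are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sigma, n, reps = 10.0, 2.0, 25, 10_000
t_crit = stats.t.ppf(0.975, df=n - 1)

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, sigma, size=n)
    sem = sample.std(ddof=1) / np.sqrt(n)
    lower, upper = sample.mean() - t_crit * sem, sample.mean() + t_crit * sem
    covered += lower <= true_mean <= upper

print(f"empirical coverage: {covered / reps:.3f}")   # close to 0.95
```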

First, the epistemic view is centered around Fisherian significance tests that are designed to provide inductive evidence against the null hypothesis in a single experiment, as summarized by the p-value.

Conversely, the epidemiological view, conducted with Neyman-Pearson hypothesis testing, is designed to minimize Type II (false acceptance) errors in the long run by providing decision rules whose error rates are controlled over repeated application.
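
As a sketch of this long-run error control (with illustrative numbers and an assumed one-sided z-test with known variance, not a setup taken from this article), the Type II error rate β and the power 1 − β at a given effect size can be computed once the Type I level α is fixed:

```python
# A sketch of the Neyman-Pearson trade-off for a one-sided z-test with known
# variance: fix the Type I error rate alpha, then compute the Type II error
# rate (and power) at an assumed true effect size. Numbers are illustrative.
from math import sqrt
from scipy import stats

alpha, sigma, n = 0.05, 1.0, 30
effect = 0.5                                   # assumed true shift under H1

z_crit = stats.norm.ppf(1 - alpha)             # rejection cutoff under H0
# Under H1 the standardized statistic is centered at effect * sqrt(n) / sigma.
shift = effect * sqrt(n) / sigma
beta = stats.norm.cdf(z_crit - shift)          # Type II (false acceptance) rate
print(f"Type II error = {beta:.3f}, power = {1 - beta:.3f}")
```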

The difference between the two is critical because the epistemic view stresses the conditions under which we might find one value to be statistically significant; meanwhile, the epidemiological view defines the conditions under which long-run procedures yield valid results.

However, where appropriate, Bayesian inferences (meaning in this case an application of Bayes' theorem) are used by those employing frequency probability.

There are two major differences in the frequentist and Bayesian approaches to inference that are not included in the above consideration of the interpretation of probability: