[17] This technique for testing the statistical significance of results was developed in the early 20th century.
[20] Statistical significance dates to the 18th century, in the work of John Arbuthnot and Pierre-Simon Laplace, who computed the p-value for the human sex ratio at birth, assuming a null hypothesis of equal probability of male and female births; see p-value § History for details.
[28][29][30] Fisher suggested a probability of one in twenty (0.05) as a convenient cutoff level to reject the null hypothesis.
[31] In a 1933 paper, Jerzy Neyman and Egon Pearson called this cutoff the significance level, which they named α.
In his 1956 publication Statistical Methods and Scientific Inference, Fisher recommended that significance levels be set according to specific circumstances.
To determine whether a result is statistically significant, a researcher calculates a p-value, which is the probability of observing an effect of the same magnitude or more extreme given that the null hypothesis is true.
The null hypothesis is rejected if the p-value is less than (or equal to) a predetermined level α. This level is also called the significance level, and it is the probability of rejecting the null hypothesis given that it is true (a type I error).
When α is set to 5%, the conditional probability of a type I error, given that the null hypothesis is true, is 5%,[37] and a statistically significant result is one where the observed p-value is less than (or equal to) 5%.
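As a minimal sketch of this decision rule, assuming hypothetical measurements, a hypothesized population mean of 0, and the common choice α = 0.05, the comparison can be carried out in Python with SciPy's one-sample t-test:

# Minimal sketch: compute a p-value and compare it to a preset significance level.
# The sample values and the null mean mu0 are made-up illustrative numbers.
from scipy import stats

sample = [0.3, 1.2, 0.8, 1.5, 0.9, 1.1, 0.4, 1.3]  # hypothetical measurements
mu0 = 0.0      # null hypothesis: the population mean equals 0
alpha = 0.05   # significance level chosen before looking at the data

t_stat, p_value = stats.ttest_1samp(sample, popmean=mu0)  # two-tailed by default

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Reject the null hypothesis: the result is statistically significant.")
else:
    print("Fail to reject the null hypothesis.")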
The use of a one-tailed test depends on whether the research question or alternative hypothesis specifies a direction, such as whether a group of objects is heavier or whether students perform better on an assessment.
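The difference can be sketched with the same hypothetical data, assuming for illustration a directional alternative ("the mean is greater than the null value"); the alternative keyword of scipy.stats.ttest_1samp requires SciPy 1.6 or later:

# Contrast two-tailed and one-tailed p-values for the same data.
from scipy import stats

sample = [0.3, 1.2, 0.8, 1.5, 0.9, 1.1, 0.4, 1.3]  # hypothetical measurements
mu0 = 0.0

two_sided = stats.ttest_1samp(sample, popmean=mu0, alternative='two-sided')
one_sided = stats.ttest_1samp(sample, popmean=mu0, alternative='greater')

# When the observed effect lies in the specified direction, the one-tailed
# p-value is half the two-tailed one, so a directional hypothesis can reach
# significance with a less extreme result.
print(f"two-tailed p = {two_sided.pvalue:.4f}")
print(f"one-tailed p = {one_sided.pvalue:.4f}")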
In specific fields such as particle physics and manufacturing, statistical significance is often expressed in multiples of the standard deviation or sigma (σ) of a normal distribution, with significance thresholds set at a much stricter level (for example 5σ).
[41][42] For instance, the certainty of the Higgs boson particle's existence was based on the 5σ criterion, which corresponds to a p-value of about 1 in 3.5 million.
[42][43] In other fields of scientific research, such as genome-wide association studies, significance levels as low as 5×10⁻⁸ are not uncommon,[44][45] because the number of tests performed is extremely large.
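The correspondence between sigma thresholds and p-values can be checked from the upper-tail probability of the normal distribution; a short sketch using SciPy's survival function:

# Translate sigma thresholds of a normal distribution into one-sided p-values.
from scipy.stats import norm

for n_sigma in (1, 2, 3, 5):
    p = norm.sf(n_sigma)  # one-sided upper-tail probability beyond n_sigma
    print(f"{n_sigma} sigma -> p ≈ {p:.2e} (about 1 in {1 / p:,.0f})")

# 5 sigma gives p ≈ 2.9e-7, i.e. roughly 1 in 3.5 million, the criterion
# quoted for the Higgs boson discovery.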
Researchers focusing solely on whether their results are statistically significant might report findings that are not substantive[46] and not replicable.
[51] Starting in the 2010s, some journals began questioning whether significance testing, and particularly using a threshold of α=5%, was being relied on too heavily as the primary measure of validity of a hypothesis.
Commentators have noted that "there is nothing wrong with hypothesis testing and p-values per se as long as authors, reviewers, and action editors use them correctly."[56] Some statisticians prefer to use alternative measures of evidence, such as likelihood ratios or Bayes factors.
[58] The widespread abuse of statistical significance represents an important topic of research in metascience.
[59] In 2016, the American Statistical Association (ASA) published a statement on p-values, saying that "the widespread use of 'statistical significance' (generally interpreted as 'p ≤ 0.05') as a license for making a claim of a scientific finding (or implied truth) leads to considerable distortion of the scientific process".
[57] In 2017, a group of 72 authors proposed to enhance reproducibility by changing the p-value threshold for statistical significance from 0.05 to 0.005.
[62] Additionally, the change to 0.005 would increase the likelihood of false negatives, whereby the effect being studied is real, but the test fails to show it.
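The power cost of a stricter threshold can be illustrated with a small simulation; the effect size, sample size, and number of repetitions below are arbitrary assumptions:

# Simulate how often a real effect is detected at alpha = 0.05 versus 0.005.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, effect, n_sims = 30, 0.5, 10_000  # sample size, true mean shift, repetitions

rejections = {0.05: 0, 0.005: 0}
for _ in range(n_sims):
    sample = rng.normal(loc=effect, scale=1.0, size=n)  # data with a real effect
    p = stats.ttest_1samp(sample, popmean=0.0).pvalue
    for alpha in rejections:
        rejections[alpha] += p <= alpha

# The lower threshold rejects the (false) null hypothesis less often,
# i.e. it produces more false negatives for the same real effect.
for alpha, count in rejections.items():
    print(f"alpha = {alpha}: power ≈ {count / n_sims:.2f}")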
In 2019, over 800 statisticians and scientists signed a message calling for the abandonment of the term "statistical significance" in science,[64] and the ASA published a further official statement[65] declaring (page 2): "We conclude, based on our review of the articles in this special issue and the broader literature, that it is time to stop using the term 'statistically significant' entirely."