The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance.
In random physical systems, such as rolling dice or spinning roulette wheels, a given type of event (such as a die yielding a six) tends to occur at a persistent rate, or "relative frequency", in a long run of trials.
On most accounts, evidential probabilities are considered to be degrees of belief, defined in terms of dispositions to gamble at certain odds.[6]
Some interpretations of probability are associated with approaches to statistical inference, including theories of estimation and hypothesis testing.
The physical interpretation, for example, is taken by followers of "frequentist" statistical methods, such as Ronald Fisher, Jerzy Neyman and Egon Pearson.
Statisticians of the opposing Bayesian school typically accept the frequency interpretation when it makes sense (although not as a definition), but there is less agreement regarding physical probabilities.
Those who promote Bayesian inference view "frequentist statistics" as an approach to statistical inference that is based on the frequency interpretation of probability, usually relying on the law of large numbers and characterized by what is called 'Null Hypothesis Significance Testing' (NHST).
But, as to what probability is and how it is connected with statistics, there has seldom been such complete disagreement and breakdown of communication since the Tower of Babel.
Doubtless, much of the disagreement is merely terminological and would disappear under sufficiently sharp analysis.
The philosophy of probability presents problems chiefly in matters of epistemology and the uneasy interface between mathematical concepts and ordinary language as it is used by non-mathematicians.
The mathematical theory of probability has its origins in correspondence discussing the mathematics of games of chance between Blaise Pascal and Pierre de Fermat in the seventeenth century,[15] and was formalized and rendered axiomatic as a distinct branch of mathematics by Andrey Kolmogorov in the twentieth century.
Thomas Bayes attempted to provide a logic that could handle varying degrees of confidence; as such, Bayesian probability is an attempt to recast probabilistic statements as expressions of the degree of confidence with which the beliefs they express are held.
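In modern notation, the updating of such degrees of confidence is usually expressed through Bayes' theorem, stated here in its standard modern form rather than as anything quoted from Bayes himself:

\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
\]

where P(H) is the prior degree of confidence in a hypothesis H, P(E | H) is the probability of the evidence E supposing H is true, and P(H | E) is the revised degree of confidence in H once E is taken into account.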
Though probability initially had somewhat mundane motivations, its modern influence and use is widespread, ranging from evidence-based medicine and Six Sigma to probabilistically checkable proofs and the string theory landscape.
The first attempt at mathematical rigour in the field of probability, championed by Pierre-Simon Laplace, is now known as the classical definition.
The ratio of this number to that of all the cases possible is the measure of this probability, which is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible.
This can be represented mathematically as follows: if a random experiment can result in N mutually exclusive and equally likely outcomes, and if N_A of these outcomes result in the occurrence of the event A, the probability of A is defined by

\[
P(A) = \frac{N_A}{N}
\]

There are two clear limitations to the classical definition.
Firstly, the definition is applicable only to situations in which there are finitely many possible outcomes; but some important random experiments, such as tossing a coin until it shows heads, give rise to an infinite set of outcomes.
And secondly, it requires an a priori determination that all possible outcomes are equally likely, without falling into the trap of circular reasoning by relying on the notion of probability itself.
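As a concrete illustration of the classical definition above (a minimal sketch; the helper function and the two-dice example are illustrative choices, not taken from the sources cited here), the probability of an event is computed by counting favorable cases among a finite set of outcomes assumed to be equally likely, which is exactly the setting in which both limitations bite:

```python
from fractions import Fraction
from itertools import product

def classical_probability(outcomes, event):
    """Classical (Laplacian) probability: favorable cases / all possible cases,
    assuming every outcome in `outcomes` is equally likely."""
    outcomes = list(outcomes)
    favorable = sum(1 for outcome in outcomes if event(outcome))
    return Fraction(favorable, len(outcomes))

# Example: the probability that two fair dice sum to 7.
# The 36 ordered pairs (d1, d2) are treated as the equally likely outcomes.
two_dice = product(range(1, 7), repeat=2)
print(classical_probability(two_dice, lambda pair: sum(pair) == 7))  # 1/6
```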
In the case of tossing a fair coin, frequentists say that the probability of getting heads is 1/2, not because there are two equally likely outcomes, but because repeated series of large numbers of trials demonstrate that the empirical frequency converges to the limit 1/2 as the number of trials goes to infinity.
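A rough simulation sketch of this frequency interpretation (the seed and trial counts below are arbitrary choices for illustration) shows the relative frequency of heads settling near 1/2 as the number of simulated tosses grows:

```python
import random

random.seed(0)  # arbitrary seed so the sketch is reproducible

heads = 0
tosses = 0
for target in (10, 100, 1_000, 10_000, 100_000):
    while tosses < target:
        heads += random.random() < 0.5  # simulate one toss of a fair coin
        tosses += 1
    print(f"{tosses:>7} tosses: relative frequency of heads = {heads / tosses:.4f}")
```

Note that the simulation itself presupposes a coin with probability 0.5 of heads on each toss, which is exactly the kind of circularity raised next.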
This appeal to limiting behavior over an infinite number of trials renders even the frequency definition circular; see, for example, “What is the Chance of an Earthquake?”[21]
Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation.
Bayesians point to the work of Ramsey[10] (p 182) and de Finetti[8] (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent.
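A small numerical sketch of the coherence ("Dutch book") idea behind that argument, with the credences and stake below chosen arbitrarily for illustration: an agent whose degrees of belief in an event and its negation sum to more than 1 can be sold a pair of bets at prices matching those beliefs, and is then guaranteed to lose money however the event turns out.

```python
def sure_payoff(credence_A, credence_not_A, stake=1.0):
    """Payoff, in every possible outcome, to an agent who buys at prices equal
    to their credences a bet paying `stake` if A occurs and a bet paying
    `stake` if A does not occur.  Exactly one of the two bets pays out."""
    price_paid = stake * (credence_A + credence_not_A)
    payout = stake  # whichever way A turns out, exactly one bet wins
    return payout - price_paid  # the same number in every possible world

# Incoherent credences, P(A) + P(not A) = 1.2 > 1: a guaranteed loss.
print(sure_payoff(0.6, 0.6))  # approximately -0.2 in every possible world
# Coherent credences, P(A) + P(not A) = 1: no guaranteed loss this way.
print(sure_payoff(0.6, 0.4))  # 0.0
```

Coherence in Ramsey's and de Finetti's sense is precisely the absence of such guaranteed-loss betting combinations, which is why they argue that coherent degrees of belief must obey the laws of probability.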
An early propensity theory of probability, on which probabilities are physical tendencies of a given kind of situation to produce outcomes of a certain kind, was put forward by Charles Sanders Peirce.[26][27][28][29] A later propensity theory was proposed by the philosopher Karl Popper, who, however, had only slight acquaintance with Peirce's writings.
A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's.
The term "probability" is also used in contexts that have nothing to do with physical randomness; consider, for example, the claim that the extinction of the dinosaurs was probably caused by a large meteorite hitting the Earth.
Statements such as "Hypothesis H is probably true" have been interpreted to mean that the (presently available) empirical evidence (E, say) supports H to a high degree.
This degree of support of H by E has been called the logical, or epistemic, or inductive probability of H given E. The differences between these interpretations are rather small, and may seem inconsequential.
Using probability to predict future observations on the basis of past observations, rather than to estimate unobservable parameters, was the main function of probability before the 20th century,[31] but this predictive approach fell out of favor compared to the parametric approach, which modeled phenomena as a physical system observed with error, as in celestial mechanics.[31]
This view came to the attention of the Anglophone world with the 1974 translation of de Finetti's book,[31] and has since been propounded by such statisticians as Seymour Geisser.