[1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates.
[4] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference.
[7] Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.
[11] Incorrect assumptions of normality in the population also invalidate some forms of regression-based inference.
"[13] In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population.
[28][29][30][31][32] Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures.
[34][35] Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than observational studies of the same phenomena.
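As an illustration of randomization-based analysis, the sketch below (with hypothetical outcome arrays treated and control standing in for a two-arm randomized experiment) re-randomizes the group labels to obtain a permutation p-value for the null hypothesis of no treatment effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outcomes from a two-arm randomized experiment.
treated = np.array([5.1, 6.3, 5.8, 7.0, 6.1])
control = np.array([4.8, 5.0, 5.5, 4.9, 5.2])

observed_diff = treated.mean() - control.mean()

# Randomization (permutation) test: re-assign the treatment labels many
# times, consistent with the original randomization, and count how often a
# difference at least as large as the observed one arises by chance.
pooled = np.concatenate([treated, control])
n_treated = len(treated)
n_permutations = 10_000
count = 0
for _ in range(n_permutations):
    perm = rng.permutation(pooled)
    diff = perm[:n_treated].mean() - perm[n_treated:].mean()
    if abs(diff) >= abs(observed_diff):
        count += 1

p_value = count / n_permutations
print(f"observed difference: {observed_diff:.2f}, randomization p-value: {p_value:.3f}")
```

The reference distribution here is generated by the randomization scheme itself, so the inference does not lean on a parametric model for the outcomes.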
[40] Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification.
The former combine, evolve, ensemble, and train algorithms that adapt dynamically to the contextual affinities of a process and learn the intrinsic characteristics of the observations.
Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case the conditional mean μ(x) = E(Y | X = x).
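A minimal sketch of the resampling route, under the assumption of a hypothetical set of (x, y) observations and a simple linear smoother, is a percentile bootstrap interval for the conditional mean at a chosen point x0; other smoothers could be substituted without changing the logic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observations (x_i, y_i).
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)

x0 = 5.0          # point at which the conditional mean mu(x0) is estimated
n_boot = 2000
estimates = np.empty(n_boot)

for b in range(n_boot):
    # Resample pairs with replacement and re-estimate mu(x0) by a linear fit.
    idx = rng.integers(0, len(x), size=len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    estimates[b] = intercept + slope * x0

# Percentile bootstrap confidence interval for the conditional mean at x0.
lower, upper = np.percentile(estimates, [2.5, 97.5])
print(f"95% bootstrap CI for E(Y | X = {x0}): ({lower:.2f}, {upper:.2f})")
```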
[44] This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand.
By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.
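The repeated-sampling calibration can be made concrete by simulation. The following sketch, which assumes a toy normal population purely for illustration, draws many datasets and records how often a nominal 95% confidence interval for the mean covers the true value; the empirical coverage estimates the interval's frequentist property.

```python
import numpy as np

rng = np.random.default_rng(2)

true_mean, sigma, n = 10.0, 3.0, 50
n_datasets = 5000
covered = 0

for _ in range(n_datasets):
    # Notional repeated sampling: draw a fresh dataset from the population.
    sample = rng.normal(true_mean, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    lower, upper = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += (lower <= true_mean <= upper)

# The empirical coverage approximates the interval's frequentist property.
print(f"empirical coverage: {covered / n_datasets:.3f}  (nominal 0.95)")
```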
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions.[citation needed] However, frequentist developments of optimal inference (such as minimum-variance unbiased estimators or uniformly most powerful tests) make use of loss functions, which play the role of (negative) utility functions.
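One way loss functions enter is in comparing estimators by their risk. The sketch below, a toy normal-variance example with squared-error loss (the specific estimators and loss are illustrative assumptions), contrasts the unbiased sample variance with the maximum-likelihood version.

```python
import numpy as np

rng = np.random.default_rng(3)

true_var, n, n_reps = 4.0, 10, 20000
losses_unbiased, losses_mle = [], []

for _ in range(n_reps):
    sample = rng.normal(0.0, np.sqrt(true_var), size=n)
    s2_unbiased = sample.var(ddof=1)   # divides by n - 1 (unbiased)
    s2_mle = sample.var(ddof=0)        # divides by n (maximum likelihood)
    # Squared-error loss, averaged over repetitions, estimates the risk.
    losses_unbiased.append((s2_unbiased - true_var) ** 2)
    losses_mle.append((s2_mle - true_var) ** 2)

print(f"risk of unbiased estimator: {np.mean(losses_unbiased):.3f}")
print(f"risk of MLE estimator:      {np.mean(losses_mle):.3f}")
```

Under squared-error loss the biased maximum-likelihood estimator typically has lower risk here, which is the sense in which the choice of loss function shapes what counts as "optimal".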
Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions.
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior.
(Methods of prior construction which do not require external input have been proposed but not yet fully developed.)
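As a concrete illustration of posterior summaries, the following sketch assumes a conjugate beta-binomial setting with a uniform Beta(1, 1) prior and hypothetical data; the posterior is then available in closed form and its usual summaries can be read off directly.

```python
from scipy import stats

# Hypothetical data: 7 successes in 20 Bernoulli trials.
successes, trials = 7, 20

# Uniform Beta(1, 1) prior; conjugacy gives a Beta posterior directly.
prior_a, prior_b = 1.0, 1.0
post = stats.beta(prior_a + successes, prior_b + trials - successes)

# Common "intuitively reasonable" posterior summaries.
print(f"posterior mean:        {post.mean():.3f}")
print(f"posterior median:      {post.median():.3f}")
print(f"95% credible interval: {post.interval(0.95)}")
```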
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data.
The process of likelihood-based inference usually involves specifying a statistical model, writing down the likelihood function of the observed data under that model, and maximizing that likelihood (or comparing likelihoods across hypotheses) to obtain parameter estimates and assess their support.

The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data.
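A brief sketch of both ideas, assuming a simple setting in which two hypothetical candidate models (exponential and gamma) are fitted to the same data by maximum likelihood and then compared by AIC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical positive-valued observations.
data = rng.gamma(shape=2.0, scale=1.5, size=300)

# Fit each candidate model by maximum likelihood.
expon_params = stats.expon.fit(data, floc=0)   # 1 free parameter (scale)
gamma_params = stats.gamma.fit(data, floc=0)   # 2 free parameters (shape, scale)

def aic(logpdf, params, k):
    # AIC = 2k - 2 * maximized log-likelihood.
    return 2 * k - 2 * np.sum(logpdf(data, *params))

print("AIC exponential:", aic(stats.expon.logpdf, expon_params, k=1))
print("AIC gamma:      ", aic(stats.gamma.logpdf, gamma_params, k=2))
```

The model with the smaller AIC is preferred in a relative sense; AIC says nothing about absolute goodness of fit.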
[50] The minimum description length (MDL) principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.
[50] The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.
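The flavor of MDL can be conveyed with a crude two-part code. The sketch below, in which the code-length accounting is deliberately simplified (an assumption for illustration), scores polynomial models of increasing degree by the bits needed to describe the model plus the bits needed to describe the data given the model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data generated from a quadratic trend plus noise.
x = np.linspace(-1, 1, 100)
y = 1.0 - 2.0 * x + 3.0 * x**2 + rng.normal(scale=0.3, size=x.size)
n = x.size

def two_part_code_length(degree):
    """Crude two-part MDL score: bits for the model plus bits for the data."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    sigma2 = residuals.var()
    # L(data | model): negative Gaussian log-likelihood, converted to bits.
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
    # L(model): the asymptotic (k/2) * log2(n) cost per fitted parameter.
    k = degree + 1
    model_bits = 0.5 * k * np.log2(n)
    return data_bits + model_bits

for degree in range(6):
    print(f"degree {degree}: {two_part_code_length(degree):8.1f} bits")
```

The model with the shortest total description length is selected; overly complex polynomials save few data bits but pay for extra parameters.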
In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.
An attempt was made to reinterpret Fisher's early work on the fiducial argument as a special case of an inference theory using upper and lower probabilities.
Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability,[citation needed] but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti.
The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics).
De Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper,[63] and has since been propounded by such statisticians as Seymour Geisser.