A normality test is used to determine whether sample data has been drawn from a normally distributed population (within some tolerance). A number of statistical tests, such as Student's t-test and the one-way and two-way ANOVA, require a normally distributed sample population. More precisely, normality tests are a form of model selection, and can be interpreted in several ways, depending on one's interpretation of probability.
One way to assess normality is to regress the data against the quantiles of a normal distribution with the same mean and variance as the sample. Lack of fit to the regression line suggests a departure from normality (see the Anderson–Darling coefficient and Minitab).
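As a rough sketch of this quantile-regression idea, the Python snippet below (with hypothetical data and illustrative variable names, assuming NumPy and SciPy are available) regresses the ordered sample on the quantiles of a normal distribution fitted to the sample's mean and variance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.sort(rng.normal(loc=10.0, scale=2.0, size=200))  # hypothetical sample

# Theoretical quantiles of a normal distribution with the sample's own
# mean and standard deviation, evaluated at plotting positions i/(n+1).
n = len(x)
probs = np.arange(1, n + 1) / (n + 1)
q = stats.norm.ppf(probs, loc=x.mean(), scale=x.std(ddof=1))

# Regress the ordered sample on the theoretical quantiles; a correlation
# near 1 is consistent with normality, while systematic lack of fit
# suggests a departure from it.
slope, intercept, r, p, se = stats.linregress(q, x)
print(f"slope={slope:.3f} intercept={intercept:.3f} r={r:.4f}")
```

A correlation coefficient well below 1, or visible curvature around the fitted line, is the lack of fit referred to above.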
A simple back-of-the-envelope test of this kind, comparing the sample's most extreme standardized deviations with the 68–95–99.7 rule, is useful in cases where one faces kurtosis risk – where large deviations matter – and has the benefit of being very easy to compute and to communicate: non-statisticians can easily grasp that "6σ events are very rare in normal distributions".[5]
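A minimal sketch of such a back-of-the-envelope check, with the sample, the threshold, and the helper name all chosen for illustration:

```python
import numpy as np

def max_abs_zscore(sample):
    """Largest standardized deviation in the sample (illustrative helper)."""
    sample = np.asarray(sample, dtype=float)
    z = (sample - sample.mean()) / sample.std(ddof=1)
    return np.abs(z).max()

rng = np.random.default_rng(1)
z_max = max_abs_zscore(rng.normal(size=500))
# Under normality, |z| > 3 should occur for roughly 0.3% of observations,
# and a 6-sigma deviation is vanishingly rare; a surprisingly large z_max
# therefore flags possible heavy tails (kurtosis risk).
print(f"largest |z| = {z_max:.2f}")
```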
Historically, the third and fourth standardized moments (skewness and kurtosis) were some of the earliest tests for normality.[14]
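As an illustration of such moment-based testing, the snippet below computes the sample skewness and excess kurtosis and applies D'Agostino's K² test, which combines the two statistics and is exposed in SciPy as scipy.stats.normaltest; the data are simulated for the example:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=300)  # hypothetical sample

print("sample skewness:", stats.skew(x))      # near 0 under normality
print("excess kurtosis:", stats.kurtosis(x))  # near 0 under normality (Fisher)

# D'Agostino's K^2 test combines the skewness and kurtosis statistics
# into a single omnibus test of normality.
stat, pvalue = stats.normaltest(x)
print(f"K^2 = {stat:.3f}, p = {pvalue:.3f}")
```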
Spiegelhalter suggests using a Bayes factor to compare normality with a different class of distributional alternatives.
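Spiegelhalter's proposal involves particular alternative families; the sketch below is only a generic illustration of the Bayes-factor idea, not his method. It crudely estimates each model's marginal likelihood by Monte Carlo averaging over parameters drawn from arbitrary illustrative priors, comparing a normal model with a heavier-tailed Laplace alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.standard_t(df=3, size=100)  # hypothetical heavy-tailed sample

def log_marginal(x, logpdf, n_draws=5000):
    """Crude Monte Carlo marginal likelihood: average the data likelihood
    over parameters drawn from simple, illustrative priors."""
    mu = rng.normal(0.0, 5.0, size=n_draws)    # illustrative prior on location
    sigma = rng.gamma(2.0, 1.0, size=n_draws)  # illustrative prior on scale
    ll = np.array([logpdf(x, m, s).sum() for m, s in zip(mu, sigma)])
    return np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()  # log-mean-exp

log_m_normal = log_marginal(x, stats.norm.logpdf)
log_m_laplace = log_marginal(x, stats.laplace.logpdf)

# Log Bayes factor in favour of normality against the Laplace alternative;
# negative values favour the heavier-tailed model.
print("log Bayes factor (normal vs Laplace):", log_m_normal - log_m_laplace)
```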