Dunnett's test

In statistics, Dunnett's test is a multiple comparison procedure[1] developed by Canadian statistician Charles Dunnett[2] to compare each of a number of treatments with a single control.

Dunnett's test was developed in 1955;[5] an updated table of critical values was published in 1964.

The major issue in any discussion of multiple-comparison procedures is the question of the probability of Type I errors.

The problem is partly technical, but it is much more a subjective question of how one chooses to define the error rate and how large a maximum possible error rate one is willing to tolerate.
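To make the error-rate question concrete, here is a minimal sketch (assuming independent comparisons, each run at level α = 0.05) of how the probability of at least one Type I error grows with the number of comparisons:

```python
# For m independent tests, each at level alpha, the family-wise error rate is
# P(at least one Type I error) = 1 - (1 - alpha)^m.
alpha = 0.05
for m in (1, 3, 10):
    fwer = 1 - (1 - alpha) ** m
    print(f"m={m:2d}  FWER={fwer:.3f}")
```

Even ten comparisons push the family-wise rate past 0.40, which is why procedures such as Dunnett's control it explicitly.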

Dunnett's test is a well-known and widely used multiple comparison procedure for simultaneously comparing, by interval estimation or hypothesis testing, all active treatments with a control when sampling from a distribution for which the normality assumption is reasonable.[7]

Dunnett's test is designed to hold the family-wise error rate at or below α when performing multiple comparisons of treatment groups with a control.

The original work on the multiple comparisons problem was done by Tukey and Scheffé.[7]

Their method was a general one, which considered all kinds of pairwise comparisons.

Tukey's and Scheffé's methods allow any number of comparisons among a set of sample means.[7]

In the general case, where we compare each of the pairs, we make k(k − 1)/2 comparisons (where k is the number of groups), but in the treatments-versus-control case we make only k − 1 comparisons. If, in the case of treatment and control groups, we were to use the more general Tukey or Scheffé method, it could yield unnecessarily wide confidence intervals.
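As a quick check of the counts above, a sketch with a hypothetical k = 5 (four treatments plus one control):

```python
from math import comb

k = 5                    # hypothetical: 4 treatment groups + 1 control
all_pairs = comb(k, 2)   # Tukey/Scheffé consider every pairwise comparison
vs_control = k - 1       # Dunnett compares each treatment with the control only
print(all_pairs, vs_control)  # → 10 4
```

With five groups, the all-pairs approach makes 10 comparisons while the treatments-versus-control approach makes only 4, which is where the narrower intervals come from.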

Dunnett's test takes into consideration the special structure of comparing treatment against control, yielding narrower confidence intervals.

It is very common to use Dunnett's test in medical experiments, for example comparing blood count measurements on three groups of animals, one of which served as a control while the other two were treated with two different drugs.[5]

In particular, the t-statistics are all derived from the same estimate of the error variance which is obtained by pooling the sums of squares for error across all (treatment and control) groups.
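A minimal sketch of that pooling, with hypothetical measurements (the group names and values below are illustrative, not data from the paper):

```python
import numpy as np

# Hypothetical measurements: one control group and two treatment groups.
control = np.array([50.2, 49.8, 51.0, 50.5])
treat_a = np.array([52.1, 51.7, 53.0, 52.4])
treat_b = np.array([49.9, 50.3, 50.1, 49.7])
groups = [control, treat_a, treat_b]

# Pool the sums of squares for error across ALL groups (treatment and control).
ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_error = sum(len(g) - 1 for g in groups)
s = np.sqrt(ss_error / df_error)   # one common estimate of the error SD

# Every t statistic shares that single variance estimate.
for g in (treat_a, treat_b):
    se = s * np.sqrt(1 / len(g) + 1 / len(control))
    t = (g.mean() - control.mean()) / se
    print(round(t, 3))
```

The point of the sketch is that both t statistics divide by the same pooled s, rather than each comparison estimating its own error variance.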

In Dunnett's test we can use a common table of critical values, but more flexible options are nowadays readily available in many statistics packages.

The critical values for any given percentage point depend on: whether a one- or two-tailed test is performed; the number of groups being compared; and the overall number of trials.
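Those dependencies can be seen directly by Monte Carlo. The following sketch (assuming equal group sizes and normal errors; the values of k, n, and the simulation count are arbitrary choices) estimates the two-sided critical value as the 95th percentile of the maximum absolute t statistic under the null hypothesis:

```python
import numpy as np

rng = np.random.default_rng(42)
k, n, sims = 3, 20, 40_000   # treatments, per-group size, simulations

# Under H0, all k treatment groups and the control share one distribution.
x = rng.standard_normal((sims, k + 1, n))   # index 0 = control group
means = x.mean(axis=2)
sp2 = x.var(axis=2, ddof=1).mean(axis=1)    # pooled variance per simulation
t = (means[:, 1:] - means[:, :1]) / np.sqrt(sp2[:, None] * (2 / n))
d_est = np.quantile(np.abs(t).max(axis=1), 0.95)
print(round(d_est, 2))
```

Changing k, n, or the quantile (one- vs. two-tailed) changes the estimate, mirroring the dependencies listed above; in practice the tabulated or package-computed values are used instead.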

The k + 1 sets of observations are assumed to be independently and normally distributed with a common variance σ², and s is an independent estimate of the common standard deviation σ of all sets of observations.

When calculating a one-sided upper (or lower) confidence interval for the true value of the difference between the mean of a treatment group and that of the control group, P constitutes the probability that this actual value will be less than the upper (or greater than the lower) limit of that interval. When calculating a two-sided confidence interval, P constitutes the probability that the true value will be between the upper and the lower limits.

As mentioned before, we would like to obtain separate confidence limits for each of the differences m_i − m_0 between the treatment means and the control mean. With N_i and N_0 denoting the treatment and control sample sizes, and μ_i and μ_0 the true means, each standardized difference (m_i − m_0 − (μ_i − μ_0)) / (s √(1/N_i + 1/N_0)) follows the Student's t distribution with n degrees of freedom.[5]
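With hypothetical summary statistics, here is a sketch of such a per-comparison confidence interval using SciPy's Student t quantile; for simultaneous coverage of all treatment-versus-control differences, Dunnett's critical value would replace the ordinary t quantile:

```python
import numpy as np
from scipy import stats

# Hypothetical summary statistics for one treatment vs. control comparison.
m_i, m_0 = 52.3, 50.4   # treatment and control sample means
N_i, N_0 = 10, 10       # sample sizes
s, df = 1.8, 40         # pooled standard-deviation estimate and its d.f.

se = s * np.sqrt(1 / N_i + 1 / N_0)
t_crit = stats.t.ppf(0.975, df)   # per-comparison two-sided 95% quantile
lower = (m_i - m_0) - t_crit * se
upper = (m_i - m_0) + t_crit * se
print(f"difference {m_i - m_0:.2f}, 95% CI [{lower:.2f}, {upper:.2f}]")
```

Because Dunnett's critical value exceeds the ordinary t quantile, the simultaneous intervals are somewhat wider than this per-comparison one.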


The following example was adapted from one given by Villars[6] and was presented in Dunnett's original paper.[5] The data represent measurements of the breaking strength of fabric treated by three different chemical processes compared with a standard method of manufacture.

Dunnett's test can be calculated by applying a sequence of steps:[10] compute the group means, pool the sums of squares for error across all groups to obtain the common variance estimate, obtain the appropriate critical value, and finally compute the quantity which must be added to and/or subtracted from the observed differences between the means to give their confidence limits.
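The whole procedure can be sketched end-to-end with illustrative breaking-strength numbers (the critical value d below is a hypothetical placeholder; in practice it is read from Dunnett's table, or computed by a statistics package, for the chosen α, number of groups, and error degrees of freedom):

```python
import numpy as np

control = np.array([55.0, 47.0, 48.0])          # standard process
treatments = [np.array([55.0, 64.0, 64.0]),     # process A
              np.array([55.0, 49.0, 52.0]),     # process B
              np.array([50.0, 44.0, 41.0])]     # process C

# Pool the error sums of squares across all groups for one variance estimate.
groups = [control] + treatments
ss = sum(((g - g.mean()) ** 2).sum() for g in groups)
df = sum(len(g) - 1 for g in groups)
s = np.sqrt(ss / df)

d = 2.88  # hypothetical placeholder for Dunnett's tabulated critical value
for g in treatments:
    diff = g.mean() - control.mean()
    # The half-width added to / subtracted from each observed difference:
    A = d * s * np.sqrt(1 / len(g) + 1 / len(control))
    print(f"{diff:+6.2f}  limits [{diff - A:6.2f}, {diff + A:6.2f}]")
```

An interval that excludes zero indicates a process whose breaking strength differs from the standard at the chosen family-wise level.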