Dixon's Q test

The test assumes a normal distribution. Per Robert Dean and Wilfrid Dixon, and others, the test should be used sparingly and never more than once in a given data set.

To apply a Q test for bad data, arrange the data in order of increasing values and calculate Q as defined:

Q = gap / range

where gap is the absolute difference between the outlier in question and the closest number to it, and range is the difference between the largest and smallest values in the data set. If Q > Qtable, where Qtable is a reference value corresponding to the sample size and confidence level, then reject the questionable point at that confidence level.

Consider a data set in which, after the values are rearranged in increasing order, the smallest value is 0.167. We hypothesize that 0.167 is an outlier.
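The single-outlier procedure above can be sketched in a few lines of Python. The data set below is illustrative only (chosen so that 0.167 is the suspect value), and the critical value 0.466, the commonly tabulated two-tailed Q at 95% confidence for n = 10, is an assumption of this sketch rather than something stated in the text.

```python
def dixon_q(data):
    """Return (suspect value, Q) for the most isolated extreme point."""
    s = sorted(data)                      # Q test requires ordered data
    spread = s[-1] - s[0]                 # range of the data set
    gap_low = s[1] - s[0]                 # gap at the low end
    gap_high = s[-1] - s[-2]              # gap at the high end
    if gap_low >= gap_high:               # test the more isolated end
        return s[0], gap_low / spread
    return s[-1], gap_high / spread

# Illustrative data set (n = 10), already containing the suspect value 0.167.
data = [0.167, 0.177, 0.181, 0.181, 0.182, 0.183, 0.184, 0.186, 0.187, 0.189]
suspect, q = dixon_q(data)
print(suspect, round(q, 3))   # 0.167 0.455

q_table_95 = 0.466            # assumed two-tailed 95% limit for n = 10
print("reject" if q > q_table_95 else "retain")  # retain at 95% confidence
```

Because Q (about 0.455) does not exceed the assumed 95% limit, 0.167 would be retained at that confidence level; a looser 90% limit could lead to the opposite conclusion, which is why the confidence level must be fixed before the test is applied.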

McBane[1] notes that Dixon provided related tests intended to search for more than one outlier, but they are much less frequently used than the r10 (Q) version, which is intended to eliminate a single outlier.

This table summarizes the limit values of the two-tailed Dixon's Q test:

Number of values:  3      4      5      6      7      8      9      10
Q90%:              0.941  0.765  0.642  0.560  0.507  0.468  0.437  0.412
Q95%:              0.970  0.829  0.710  0.625  0.568  0.526  0.493  0.466
Q99%:              0.994  0.926  0.821  0.740  0.680  0.634  0.598  0.568