Kendall's W

Kendall's W (also known as Kendall's coefficient of concordance) is a non-parametric statistic for rank correlation. It is a normalization of the statistic of the Friedman test, and can be used for assessing agreement among raters and in particular inter-rater reliability.

Suppose, for instance, that a number of people have been asked to rank a list of political concerns, from the most important to the least important.

If the test statistic W is 1, then all the survey respondents have been unanimous, and each respondent has assigned the same order to the list of concerns.

If W is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random.

Intermediate values of W indicate a greater or lesser degree of unanimity among the various responses.

While tests using the standard Pearson correlation coefficient assume normally distributed values and compare two sequences of outcomes simultaneously, Kendall's W makes no assumptions regarding the nature of the probability distribution and can handle any number of distinct outcomes.

Definition

Suppose that object i is given the rank $r_{i,j}$ by judge number j, where there are in total n objects and m judges. Then the total rank given to object i is

$R_i = \sum_{j=1}^{m} r_{i,j},$

and the mean value of these total ranks is

$\bar{R} = \tfrac{1}{2} m (n + 1).$

The sum of squared deviations, S, is defined as

$S = \sum_{i=1}^{n} (R_i - \bar{R})^2,$

and then Kendall's W is defined as[1]

$W = \frac{12 S}{m^2 (n^3 - n)}.$

If the test statistic W is 1, then all the judges or survey respondents have been unanimous, and each judge or respondent has assigned the same order to the list of objects or concerns. If W is 0, then there is no overall trend of agreement among the respondents, and their responses may be regarded as essentially random. Intermediate values of W indicate a greater or lesser degree of unanimity among the various judges or respondents.
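The definition above translates directly into a few lines of code. Below is a minimal Python sketch (the function name and example data are illustrative, not from the original text) that computes W for a small matrix of complete, untied rankings:

```python
# Minimal sketch: Kendall's W for a complete, untied rank matrix.
import numpy as np

def kendalls_w(ranks):
    """ranks: (m judges) x (n objects) array of rankings 1..n, no ties."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    R = ranks.sum(axis=0)              # total rank R_i of each object
    R_bar = m * (n + 1) / 2            # mean of the total ranks
    S = ((R - R_bar) ** 2).sum()       # sum of squared deviations
    return 12 * S / (m**2 * (n**3 - n))

# Three judges ranking four objects, in near-perfect agreement.
ranks = [[1, 2, 3, 4],
         [2, 1, 3, 4],
         [1, 2, 4, 3]]
print(kendalls_w(ranks))               # -> 0.822..., strong agreement
```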

Kendall and Gibbons (1990) also show W is linearly related to the mean value of the Spearman's rank correlation coefficients between all $\binom{m}{2}$ possible pairs of rankings between judges:

$\bar{r}_s = \frac{mW - 1}{m - 1}.$

Incomplete blocks

When the judges evaluate only some subset of the n objects, and when the corresponding block design is a (n, m, r, p, λ)-design (note the different notation: here r is the number of times each object is ranked, p is the number of objects ranked by each judge, and λ is the number of judges ranking each pair of objects), then W is defined as

$W = \frac{12 \sum_{i=1}^{n} R_i^2 - 3 r^2 n (p+1)^2}{\lambda^2 n (n^2 - 1)}.$

If p = n and r = m, so that each judge ranks all n objects, the formula above is equivalent to the original one.
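The Kendall and Gibbons relation above between W and the mean pairwise Spearman correlation can be checked numerically. The following sketch (all names and data are illustrative) compares the mean of the pairwise Spearman coefficients with $(mW - 1)/(m - 1)$ for random untied rankings:

```python
# Numerical check of the relation between W and the mean pairwise
# Spearman rank correlation, for complete untied rankings.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
m, n = 5, 8
ranks = np.array([rng.permutation(n) + 1 for _ in range(m)])  # random rankings

R = ranks.sum(axis=0)
S = ((R - m * (n + 1) / 2) ** 2).sum()
W = 12 * S / (m**2 * (n**3 - n))

mean_rho = np.mean([spearmanr(ranks[a], ranks[b])[0]
                    for a, b in combinations(range(m), 2)])
print(mean_rho, (m * W - 1) / (m - 1))   # the two values coincide
```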

Correction for ties

When tied values occur, they are each given the average of the ranks that would have been given had no ties occurred. For example, the data set {80,76,34,80,73,80} has values of 80 tied for 4th, 5th, and 6th place; since the mean of {4,5,6} = 5, ranks would be assigned to the raw data values as follows: {5,3,1,5,2,5}.
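This averaging is the default behaviour of scipy's rankdata, which reproduces the example (a small illustrative snippet, not part of the original text):

```python
# Average ranks for ties; method="average" is rankdata's default.
from scipy.stats import rankdata

print(rankdata([80, 76, 34, 80, 73, 80]))   # -> [5. 3. 1. 5. 2. 5.]
```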

To correct for ties, assign ranks to tied values as above and compute the correction factors

$T_j = \sum_{i=1}^{g_j} (t_i^3 - t_i),$

where $t_i$ is the number of tied ranks in the ith group of tied ranks (where a group is a set of values having constant (tied) rank) and $g_j$ is the number of groups of ties in the set of ranks (ranging from 1 to n) for judge j.

Thus, $T_j$ is the correction factor required for the set of ranks for judge j, i.e. the jth set of ranks. Note that if there are no tied ranks for judge j, $T_j$ equals 0.

With the correction for ties, the formula for W becomes

$W = \frac{12 \sum_{i=1}^{n} R_i^2 - 3 m^2 n (n+1)^2}{m^2 n (n^2 - 1) - m \sum_{j=1}^{m} T_j},$

where $R_i$ is the sum of the ranks for object i, and $\sum_{j=1}^{m} T_j$ is the sum of the values of $T_j$ over all m sets of ranks.[3]
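As a sketch, the tie-corrected formula can be implemented as follows, assuming raw scores as input and converting each judge's scores to average ranks (all names and data are illustrative):

```python
# Sketch: tie-corrected Kendall's W from raw scores.
import numpy as np
from scipy.stats import rankdata

def kendalls_w_ties(scores):
    """scores: (m judges) x (n objects) array of raw ratings."""
    scores = np.asarray(scores, dtype=float)
    m, n = scores.shape
    ranks = np.vstack([rankdata(row) for row in scores])  # average ranks
    R = ranks.sum(axis=0)                                 # total rank per object
    # Tie correction T_j = sum over tie groups of (t^3 - t) for judge j;
    # groups of size 1 contribute 0.
    T = []
    for row in ranks:
        _, counts = np.unique(row, return_counts=True)
        T.append(((counts ** 3) - counts).sum())
    num = 12 * (R ** 2).sum() - 3 * m**2 * n * (n + 1) ** 2
    den = m**2 * n * (n**2 - 1) - m * sum(T)
    return num / den

scores = [[80, 76, 34, 80, 73, 80],   # the tied example from above
          [75, 70, 40, 80, 72, 78],
          [82, 75, 30, 81, 70, 79]]
print(kendalls_w_ties(scores))        # -> 0.907..., strong agreement
```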

Weighted Kendall's W

In some cases, the importance of the raters (experts) might not be the same for every rater: in real-world situations, the importance of each rater can be different. In this case a weighted variant of Kendall's W can be used. Suppose that $\lambda_j$ is the weight of judge j. Then the total rank given to object i is

$R_i = \sum_{j=1}^{m} \lambda_j r_{i,j},$

and the mean value of these total ranks is

$\bar{R} = \frac{1}{n} \sum_{i=1}^{n} R_i.$

The sum of squared deviations, S, is defined as before,

$S = \sum_{i=1}^{n} (R_i - \bar{R})^2,$

and then the weighted Kendall's W is defined as

$W = \frac{12 S}{\left( \sum_{j=1}^{m} \lambda_j \right)^2 (n^3 - n)}.$

In the case of tied ranks, the tie correction must be taken into account in the above formula:

$W = \frac{12 S}{\left( \sum_{j=1}^{m} \lambda_j \right)^2 (n^3 - n) - \left( \sum_{j=1}^{m} \lambda_j \right) \sum_{j=1}^{m} \lambda_j T_j},$

where $T_j = \sum_{i=1}^{g_j} (t_i^3 - t_i)$ as above, $t_i$ represents the number of tied ranks in the ith tie group for judge j, and $g_j$ is the total number of tie groups for judge j. With equal weights $\lambda_j = 1$ these formulas reduce to the unweighted ones.[4]
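A minimal sketch of the weighted variant in the untied case, following the definitions above (the weight vector and all names here are illustrative):

```python
# Sketch: weighted Kendall's W for untied rankings with rater weights.
import numpy as np

def weighted_w(ranks, weights):
    """ranks: (m x n) untied rankings; weights: length-m rater weights."""
    ranks = np.asarray(ranks, dtype=float)
    w = np.asarray(weights, dtype=float)
    m, n = ranks.shape
    R = (w[:, None] * ranks).sum(axis=0)   # weighted total ranks
    R_bar = R.sum() / n                    # mean of the weighted totals
    S = ((R - R_bar) ** 2).sum()
    return 12 * S / (w.sum() ** 2 * (n**3 - n))

ranks = [[1, 2, 3, 4],
         [2, 1, 3, 4],
         [1, 2, 4, 3]]
print(weighted_w(ranks, [1, 1, 1]))   # equal weights recover ordinary W
print(weighted_w(ranks, [3, 1, 1]))   # judge 1 dominates the consensus
```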

Significance tests

In the case of complete ranks, a commonly used significance test for W against a null hypothesis of no agreement (i.e. random rankings) is given by Kendall and Gibbons (1990)[5]:

$\chi^2 = m(n-1)W,$

where the test statistic takes a chi-squared distribution with $n - 1$ degrees of freedom.
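For example, the test can be carried out with scipy's chi-squared distribution (the numbers below are illustrative):

```python
# Chi-squared significance test for W with n-1 degrees of freedom.
from scipy.stats import chi2

m, n, W = 3, 4, 0.822
chi2_stat = m * (n - 1) * W
p_value = chi2.sf(chi2_stat, df=n - 1)   # upper-tail probability
print(chi2_stat, p_value)
```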

Legendre[6] compared via simulation the power of the chi-square and permutation testing approaches to determining significance for Kendall's W. Results indicated the chi-square method was overly conservative compared to a permutation test when sample sizes were small.
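A permutation test of the kind Legendre compared can be sketched as follows: under the null hypothesis each judge's ranking is an independent random permutation, so W is recomputed over independently permuted rows (all names and data here are illustrative):

```python
# Sketch: permutation test for Kendall's W.
import numpy as np

def permutation_pvalue(ranks, n_perm=9999, seed=0):
    rng = np.random.default_rng(seed)
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape

    def w_stat(r):
        R = r.sum(axis=0)
        S = ((R - m * (n + 1) / 2) ** 2).sum()
        return 12 * S / (m**2 * (n**3 - n))

    observed = w_stat(ranks)
    # Count permuted data sets at least as concordant as the observed one.
    hits = sum(w_stat(np.vstack([rng.permutation(row) for row in ranks]))
               >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

ranks = np.array([[1, 2, 3, 4], [2, 1, 3, 4], [1, 2, 4, 3]])
print(permutation_pvalue(ranks))
```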

Marozzi[7] extended this by also considering the F test, as proposed in the original publication introducing the W statistic by Kendall & Babington Smith (1939):

$F = \frac{(m-1) W}{1 - W},$

where the test statistic follows an F distribution with $\nu_1 = n - 1 - 2/m$ and $\nu_2 = (m - 1)\,\nu_1$ degrees of freedom.
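Again as an illustrative sketch, the F test can be evaluated with scipy's F distribution; note that the degrees of freedom need not be integers:

```python
# F-test significance for Kendall's W (illustrative values).
from scipy.stats import f

m, n, W = 3, 4, 0.822
F_stat = (m - 1) * W / (1 - W)
nu1 = n - 1 - 2 / m          # first degrees of freedom (non-integer)
nu2 = (m - 1) * nu1          # second degrees of freedom
print(F_stat, f.sf(F_stat, nu1, nu2))
```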