Burgess selected success on parole as the target outcome, so a predictor such as a history of theft was coded as "yes" = 0 and "no" = 1.
For predictors with more than two values, the Burgess method selects a cutoff score based on subjective judgment.
As an example, a study using the Burgess method (Gottfredson & Snyder, 2005) selected as one predictor the number of complaints for delinquent behavior.
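As a minimal sketch of this scoring step, the fragment below codes three predictors and sums them. The second predictor and the cutoff of two complaints are hypothetical, chosen only to illustrate how judgment-coded items add up into a Burgess-style score:

```python
def burgess_score(no_theft_history, employed_before_arrest, num_complaints):
    """Sum of binary-coded predictors; higher scores point toward parole success."""
    score = 0
    score += 1 if no_theft_history else 0        # "no" history of theft = 1, "yes" = 0
    score += 1 if employed_before_arrest else 0  # hypothetical second predictor
    score += 1 if num_complaints <= 2 else 0     # judgment-based cutoff on a count
    return score

print(burgess_score(True, False, 1))  # -> 2
```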
A related approach, the Kerby method, differs in two ways. First, the cutoff score is chosen by a classification and regression tree (CART) analysis; in this way, the selection of the cutoff score is based not on subjective judgment, but on a statistical criterion, such as the point where the chi-square value is a maximum.
The second difference is that while the Burgess method is applied to a binary outcome, the Kerby method can also be applied to a multi-valued outcome, because CART analysis can identify cutoff scores in such cases using a criterion such as the point where the t-value is a maximum.
Because CART analysis is not only binary but also recursive, a predictor variable may be partitioned more than once, yielding two cutoff scores.
Each predictor is then coded so that one point is added to the score each time the CART analysis creates a partition. When the analysis yields a single partition, the result is like the Burgess method in that the predictor is coded as either zero or one.
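A rough sketch of this coding step is below, using scikit-learn's CART implementation. One substitution to note: scikit-learn splits on Gini impurity or entropy rather than on a maximum chi-square or t-value, so the cutoffs it finds are analogous to, not identical with, those described above; the data and function names are illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cart_cutoffs(x, y, max_splits=2):
    """Fit a CART tree on a single predictor and return its sorted cutoff scores."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_splits + 1, random_state=0)
    tree.fit(x.reshape(-1, 1), y)
    # Internal nodes carry a real threshold; leaf nodes are marked with -2.
    return sorted(t for t in tree.tree_.threshold if t != -2)

def kerby_code(value, cutoffs):
    """Add one point for each partition the raw score falls above."""
    return sum(value > t for t in cutoffs)

# Toy data: a count predictor weakly related to a binary outcome.
rng = np.random.default_rng(0)
x = rng.integers(0, 10, size=200).astype(float)
y = (x + rng.normal(0, 3, size=200) > 5).astype(int)

cuts = cart_cutoffs(x, y)
print(cuts, [kerby_code(v, cuts) for v in (1.0, 5.0, 9.0)])
```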
Another approach is to standardize each predictor before adding it into the sum; with this method of unit-weighted regression, the variate is the sum of the z-scores (e.g., Dawes, 1979; Bobko, Roth, & Buster, 2007).
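A minimal sketch of the z-score method, assuming a small array of hypothetical predictor columns:

```python
import numpy as np

def unit_weighted_variate(X):
    """Sum of column-wise z-scores; X is an (n_samples, n_predictors) array."""
    z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    return z.sum(axis=1)

# Hypothetical data: two predictors on very different scales.
X = np.array([[3.0, 110.0], [5.0, 95.0], [4.0, 120.0]])
print(unit_weighted_variate(X))
```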
The mathematical issues involved in unit-weighted regression were first discussed in 1938 by Samuel Stanley Wilks, a leading statistician who had a special interest in multivariate analysis.
But a school may have no money to gather data and conduct a standard multiple regression analysis.
The results showed that Wilks was indeed correct: unit weights tend to perform well in simulations of practical studies.
Jacob Cohen also discussed the value of unit weights and noted their practical utility.
In one study using unit weights, the outcome of interest was suicidal thinking, and the predictor variables were broad personality traits.
Andreas Graefe applied an equal weighting approach to nine established multiple regression models for forecasting U.S. presidential elections.
Across the ten elections from 1976 to 2012, equally weighted predictors reduced the forecast error of the original regression models on average by four percent.
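The comparison can be sketched on simulated data (not the election series Graefe used): fit ordinary least squares on a deliberately short training window, then replace the fitted coefficients with equal weights that keep only each predictor's sign. With small samples the gap between the two tends to be small, and equal weights can come out ahead:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 220, 3
X = rng.normal(size=(n, k))
y = X @ np.array([0.5, 0.4, 0.3]) + rng.normal(scale=2.0, size=n)

# Standardize predictors so equal weights are comparable across columns.
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# A short training window, mimicking the small samples of election series.
train, test = slice(0, 20), slice(20, None)
beta, *_ = np.linalg.lstsq(Z[train], y[train] - y[train].mean(), rcond=None)

# Equal weights keep each predictor's sign but share one common magnitude.
w_equal = np.sign(beta) * np.abs(beta).mean()

ols_mae = np.abs(y[test] - (y[train].mean() + Z[test] @ beta)).mean()
eq_mae = np.abs(y[test] - (y[train].mean() + Z[test] @ w_equal)).mean()
print(f"OLS test MAE: {ols_mae:.3f}  equal-weight test MAE: {eq_mae:.3f}")
```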
Previous research had made use of multiple regression; with this method, it is natural to look for the best predictor, the one with the highest beta weight.
Bry and colleagues noted that one previous study had found that early use of alcohol was the best predictor.
In this case, the models are not correctly specified, and the estimates of the beta weights suffer from omitted variable bias.
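A small simulation makes the point: two correlated predictors both drive the outcome, but the model that omits one inflates the beta weight of the other. All variable names and coefficients here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x1 = rng.normal(size=n)                        # e.g., early alcohol use (illustrative)
x2 = 0.7 * x1 + rng.normal(scale=0.7, size=n)  # a correlated second risk factor
y = 0.3 * x1 + 0.3 * x2 + rng.normal(size=n)   # outcome driven equally by both

def ols_slopes(X, y):
    """OLS slope estimates with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0][1:]

print("both predictors:", ols_slopes(np.column_stack([x1, x2]), y))
print("x2 omitted:     ", ols_slopes(x1.reshape(-1, 1), y))  # slope inflated toward ~0.51
```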