Consensus-based assessment expands on the common practice of consensus decision-making and the theoretical observation that expertise can be closely approximated by large numbers of novices or journeymen.
It provides a method for establishing measurement standards in highly ambiguous domains of knowledge, such as emotional intelligence, politics, religion, values, and culture in general.
Peter Legree and Joseph Psotka, working together over several decades, proposed that psychometric g could be measured unobtrusively through survey-like scales requiring judgments.
Legree and Psotka subsequently created scales that asked respondents to estimate word frequency; judge binary probabilities of good continuation; identify knowledge implications; and approximate employment distributions.
The items were deliberately constructed to lack objective referents, so the scales required respondents to provide judgments that were scored against broadly developed, consensual standards.
Performance on this judgment battery correlated approximately 0.80 with conventional measures of psychometric g. Unlike mathematics or physics questions, the selection of items, scenarios, and options was guided only loosely by a theory emphasizing complex judgment; the explicit scoring keys were unknown until after the assessments had been administered, because they were derived consensually, from the average of all respondents' answers, using deviation scores, correlations, or factor scores.
The means of a group's responses can thus serve as effective scoring rubrics, that is, measurement standards against which individual performance is evaluated.
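As an illustration of this scoring approach, the following sketch builds a consensus key from the group's mean response to each item and scores each respondent by deviation from, or correlation with, that key. The data and function names are hypothetical; this is a minimal sketch of the general idea, not Legree and Psotka's actual procedure.

```python
from statistics import mean

def consensus_key(responses):
    """Column means of a respondents-by-items response matrix."""
    return [mean(col) for col in zip(*responses)]

def deviation_score(respondent, key):
    """Mean absolute deviation from the consensus key (lower = closer)."""
    return mean(abs(r - k) for r, k in zip(respondent, key))

def correlation_score(respondent, key):
    """Pearson correlation between a respondent's judgments and the key."""
    mr, mk = mean(respondent), mean(key)
    cov = sum((r - mr) * (k - mk) for r, k in zip(respondent, key))
    norm_r = sum((r - mr) ** 2 for r in respondent) ** 0.5
    norm_k = sum((k - mk) ** 2 for k in key) ** 0.5
    return cov / (norm_r * norm_k)

# Hypothetical judgments (e.g., estimated word frequencies on a 1-7 scale)
responses = [
    [5, 2, 6, 3],
    [4, 1, 7, 3],
    [5, 3, 6, 2],
    [1, 7, 2, 6],   # an atypical respondent
]
key = consensus_key(responses)
dev_scores = [deviation_score(r, key) for r in responses]
corr_scores = [correlation_score(r, key) for r in responses]
```

Note that the key exists only after data collection: the same responses that are being scored also define the standard, which is exactly the property that distinguishes consensus-based keys from the fixed answer keys of mathematics or physics tests.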
Transposed, or Q methodology, factor analysis, developed by the psychologist William Stephenson, makes this relationship explicit.
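The transposed analysis can be sketched numerically as follows: instead of correlating items across persons (the usual R methodology), the response matrix is treated row-wise so that correlations are computed among persons, and a factor extraction over the person correlation matrix groups like-minded respondents. The data below are hypothetical, and eigendecomposition stands in here for a full factor analysis.

```python
import numpy as np

# Hypothetical respondents-by-items judgment matrix.
responses = np.array([
    [5, 2, 6, 3, 4],
    [4, 1, 7, 3, 5],
    [5, 3, 6, 2, 4],
    [1, 7, 2, 6, 1],   # an atypical respondent
])

# R methodology would correlate columns (items); Q methodology
# correlates rows (persons). np.corrcoef treats rows as variables.
person_corr = np.corrcoef(responses)          # respondents x respondents

# Principal-axis-style extraction via eigendecomposition of the person
# correlation matrix; loadings on the first factor group respondents
# who share the same pattern of judgments.
eigvals, eigvecs = np.linalg.eigh(person_corr)
first_factor = eigvecs[:, -1] * np.sqrt(eigvals[-1])  # largest eigenvalue last
```

Respondents whose judgments track the group consensus load together on the first factor, while atypical respondents load in the opposite direction, which is the sense in which the transposed analysis makes the consensus structure explicit.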
Accordingly, CBA techniques are routinely employed in various measures of non-traditional intelligences (e.g., practical, emotional, and social intelligence).