Determining content validity involves an element of subjectivity, because it requires some degree of agreement about what a particular personality trait, such as extraversion, represents. Without such agreement, high content validity cannot be established.
Assessing content validity requires recognized subject matter experts to evaluate whether test items measure the defined content, and it demands more rigorous statistical tests than the assessment of face validity.
Content validity is most often addressed in academic and vocational testing, where test items need to reflect the knowledge actually required for a given topic area (e.g., history) or job skill (e.g., accounting).
In an article regarding pre-employment testing, Lawshe (1975) [2] proposed that each of the subject matter expert raters (SMEs) on the judging panel respond to the following question for each item: "Is the skill or knowledge measured by this item 'essential,' 'useful, but not essential,' or 'not necessary' to the performance of the job?"
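As a minimal sketch of this rating procedure, the snippet below tallies how many panelists judged one item "essential" using Lawshe's three response categories; the panel responses themselves are invented for illustration.

```python
# Hypothetical SME panel responses for a single test item,
# using the three categories from Lawshe's (1975) question.
from collections import Counter

ratings = [
    "essential", "essential", "useful, but not essential",
    "essential", "not necessary", "essential",
]

tally = Counter(ratings)
n_essential = tally["essential"]  # number of SMEs rating the item essential
print(n_essential)  # 4
```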
The more panelists who agree that a particular item is essential, the greater that item's content validity.
Using these assumptions, Lawshe developed a formula termed the content validity ratio:

CVR = (n_e − N/2) / (N/2)

where n_e is the number of SME panelists rating the item "essential" and N is the total number of SME panelists.
Lawshe (1975) provided a table of critical values for the CVR by which a test evaluator could determine, for a pool of SMEs of a given size, the size of a calculated CVR necessary to exceed chance expectation.
However, when the formula is applied to a panel of 8 raters, 7 "essential" ratings and 1 other rating yield a CVR of (7 − 4)/4 = .75.
Wilson, Pan, and Schumsky (2012), seeking to correct the error, found no explanation in Lawshe's writings and no publication by Schipper describing how the table of critical values was computed.