[5] "Gold standard" can refer to popular clinical endpoints by which scientific evidence is evaluated.
For example, in resuscitation research, the "gold standard" test of a medication or procedure is whether it leads to an increase in the number of neurologically intact survivors who walk out of the hospital.[7]
In practice, however, uptake of this term by authors, and its enforcement by editorial staff, is notably poor, at least in AMA journals.[9]
A hypothetical ideal "gold standard" test has a sensitivity of 100% with respect to the presence of the disease (it identifies all individuals with a well-defined disease process and produces no false-negative results) and a specificity of 100% (it never identifies someone as having the condition who does not have it, and so produces no false-positive results).[10]
Sometimes a test becomes popular and is declared the gold standard without adequate consideration of alternatives, or despite known weaknesses.[citation needed]
When the gold standard is not a perfect one, its sensitivity and specificity must be calibrated against more accurate tests or against the definition of the condition.
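Why calibration matters can be shown with elementary conditional probability: if the reference test itself errs, a new test's *apparent* sensitivity (its agreement with the reference) differs from its true sensitivity. A minimal sketch, assuming the two tests err independently given true disease status; all parameter values are hypothetical:

```python
# Apparent sensitivity of a new test when scored against an imperfect
# reference ("gold") standard, assuming conditional independence of the
# two tests given the true disease status. All values are hypothetical.

prevalence = 0.20                   # P(disease)
ref_sens, ref_spec = 0.90, 0.95     # imperfect reference test
test_sens, test_spec = 0.95, 0.90   # true performance of the new test

# P(new test positive AND reference positive)
both_pos = (prevalence * ref_sens * test_sens
            + (1 - prevalence) * (1 - ref_spec) * (1 - test_spec))
# P(reference positive)
ref_pos = prevalence * ref_sens + (1 - prevalence) * (1 - ref_spec)

# Apparent sensitivity = P(new test positive | reference positive)
apparent_sens = both_pos / ref_pos
print(f"true sensitivity     = {test_sens:.2f}")
print(f"apparent sensitivity = {apparent_sens:.2f}")  # biased below the true value
```

With these numbers the apparent sensitivity falls to roughly 0.80 despite a true sensitivity of 0.95, which is why an imperfect gold standard must itself be calibrated before it is used to judge other tests.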
At other times, the "gold standard" refers not to the best-performing test in existence, but to the best test available under reasonable conditions.
Claassen argues that the variant term "golden standard" is incorrect, as it implies a level of perfection that is unattainable in medical science.