In statistics, the score (or informant[1]) is the gradient of the log-likelihood function with respect to the parameter vector.
If the log-likelihood function is differentiable over the parameter space, the score will vanish at an interior local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.
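For instance (a standard textbook illustration, not drawn from the cited sources), for \( n \) independent Bernoulli trials with success probability \( p \) and \( k \) observed successes, the log-likelihood is \( \log \mathcal{L}(p) = k \log p + (n - k) \log(1 - p) \), so the score is
\[
s(p) = \frac{k}{p} - \frac{n - k}{1 - p},
\]
and setting \( s(\hat{p}) = 0 \) yields the maximum likelihood estimate \( \hat{p} = k/n \).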
Since the score is a function of the observations, which are subject to sampling error, it lends itself to a test statistic known as the score test, in which the parameter is held at a particular value.
The score is the gradient of \( \log \mathcal{L}(\theta) \), the natural logarithm of the likelihood function, with respect to an m-dimensional parameter vector \( \theta \):
\[
s(\theta) \equiv \frac{\partial \log \mathcal{L}(\theta)}{\partial \theta}.
\]
This differentiation yields a \( 1 \times m \) row vector and indicates the sensitivity of the likelihood to small changes in the parameter values. In older literature, "linear score" may refer to the score with respect to an infinitesimal translation of a given density.
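The definition can be checked numerically. The following sketch (illustrative only; the normal model, sample, and step size are assumptions for the example) compares the analytic score of a normal log-likelihood in the mean with a central finite-difference approximation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100)  # sample from N(2, 1)

def log_likelihood(mu, x, sigma=1.0):
    # log L(mu) for i.i.d. N(mu, sigma^2) observations
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu)**2 / (2 * sigma**2))

def score(mu, x, sigma=1.0):
    # analytic derivative of the log-likelihood with respect to mu
    return np.sum(x - mu) / sigma**2

mu0, h = 1.5, 1e-6
numeric = (log_likelihood(mu0 + h, x) - log_likelihood(mu0 - h, x)) / (2 * h)
print(score(mu0, x), numeric)  # the two values should agree closely
```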
This convention arises from a time when the primary parameter of interest was the mean or median of a distribution.
In this case, the likelihood of an observation is given by a density of the form \( \mathcal{L}(\theta; X) = f(X + \theta) \).
Under certain regularity conditions on the density functions of the random variables,[3][4] the expected value of the score, evaluated at the true parameter value \( \theta \), is zero. To see this, rewrite the likelihood function \( \mathcal{L} \) as a probability density function \( \mathcal{L}(\theta; x) = f(x; \theta) \). Then:
\[
\operatorname{E}[s(\theta) \mid \theta]
= \int \frac{\partial \log \mathcal{L}(\theta; x)}{\partial \theta} f(x; \theta) \, dx
= \int \frac{1}{f(x; \theta)} \frac{\partial f(x; \theta)}{\partial \theta} f(x; \theta) \, dx
= \int \frac{\partial f(x; \theta)}{\partial \theta} \, dx.
\]
The assumed regularity conditions allow the interchange of derivative and integral (see Leibniz integral rule), hence the above expression may be rewritten as
\[
\frac{\partial}{\partial \theta} \int f(x; \theta) \, dx = \frac{\partial}{\partial \theta} 1 = 0.
\]
It is worth restating the above result in words: the expected value of the score, at the true parameter value \( \theta \), is zero.
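This mean-zero property can also be verified by simulation. The sketch below (an illustration; the Poisson model and the simulation settings are assumptions for the example) averages the score, evaluated at the true parameter, over many replicated samples:

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, n, reps = 3.0, 50, 20_000

def score(lam, x):
    # score of a Poisson(lam) sample: d/dlam of sum(x*log(lam) - lam - log(x!))
    return np.sum(x / lam - 1.0)

scores = [score(lam_true, rng.poisson(lam_true, size=n)) for _ in range(reps)]
print(np.mean(scores))  # close to zero, up to Monte Carlo error
```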
The variance of the score is defined to be the Fisher information. Since the score has mean zero at the true parameter value, under the same regularity conditions this variance is equal to the negative expected value of the Hessian matrix of the log-likelihood.
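In symbols, writing \( \ell(\theta) = \log \mathcal{L}(\theta) \) for the log-likelihood, this identity reads:
\[
\mathcal{I}(\theta)
= \operatorname{Var}\!\big(s(\theta)\big)
= \operatorname{E}\!\left[ s(\theta)\, s(\theta)^{\mathsf{T}} \,\middle|\, \theta \right]
= -\operatorname{E}\!\left[ \frac{\partial^{2} \ell(\theta)}{\partial \theta \, \partial \theta^{\mathsf{T}}} \,\middle|\, \theta \right].
\]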
Note that the Fisher information is not a function of any particular observation, as the random variable \( X \) has been averaged out.
This concept of information is useful when comparing two methods of observation of some random process.[6] The scoring algorithm is an iterative method for numerically determining the maximum likelihood estimator.
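As a sketch of the idea (a generic illustration, not a presentation from the cited sources), each Fisher scoring step moves the current estimate by the score scaled by the inverse Fisher information. For an i.i.d. Poisson sample with rate \( \lambda \), the score is \( \sum_i x_i / \lambda - n \) and the information is \( n / \lambda \):

```python
import numpy as np

def fisher_scoring_poisson(x, lam=1.0, tol=1e-10, max_iter=100):
    """Fisher scoring for the rate of an i.i.d. Poisson sample (illustrative)."""
    n = len(x)
    for _ in range(max_iter):
        score = np.sum(x) / lam - n   # gradient of the log-likelihood at lam
        info = n / lam                # expected (Fisher) information at lam
        step = score / info           # Newton-type step: lam_new = lam + I^{-1} s
        lam += step
        if abs(step) < tol:           # stop once the update is negligible
            break
    return lam

rng = np.random.default_rng(2)
x = rng.poisson(4.0, size=200)
# for this model the first step lands exactly on the MLE, the sample mean
print(fisher_scoring_poisson(x), x.mean())
```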
Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error.
In 1948, C. R. Rao first proved that the square of the score divided by the Fisher information follows an asymptotic χ2-distribution under the null hypothesis.[8]
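A minimal sketch of the resulting score test for a Bernoulli proportion (the model, null value, and data here are illustrative assumptions): the statistic uses only quantities evaluated at the null value \( p_0 \), so no fitting under the alternative is required.

```python
import numpy as np
from scipy import stats

def score_test_bernoulli(x, p0):
    """Rao's score test of H0: p = p0 for i.i.d. Bernoulli data (sketch)."""
    n, k = len(x), np.sum(x)
    score = k / p0 - (n - k) / (1 - p0)  # score evaluated at the null value
    info = n / (p0 * (1 - p0))           # Fisher information at the null value
    stat = score**2 / info               # asymptotically chi-squared with 1 df
    return stat, stats.chi2.sf(stat, df=1)

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.55, size=400)
print(score_test_bernoulli(x, p0=0.5))   # test statistic and p-value
```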
The term "score function" may initially seem unrelated to its contemporary meaning, which centers on the derivative of the log-likelihood function in statistical models.
This apparent discrepancy can be traced back to the term's historical origins.
The concept of the "score function" was first introduced by the British statistician Ronald Fisher in his 1935 paper "The Detection of Linkage with 'Dominant' Abnormalities." Over time, the application and meaning of the "score function" have evolved, diverging from this original context but retaining its foundational principles. In the 1935 paper, Fisher studied families in which one parent exhibited a dominant genetic abnormality. He categorized the children of such parents into four classes based on two binary traits: whether they had inherited the abnormality or not, and their zygosity status as either homozygous or heterozygous.
Fisher devised a method to assign each family a "score," calculated based on the number of children falling into each of the four categories.
This score was used to estimate what he referred to as the "linkage parameter," which described the probability of the genetic abnormality being inherited.
The ideal score was defined as the derivative of the logarithm of the sampling density, as mentioned on page 193 of his work.[9] The term "score" later evolved through subsequent research, notably expanding beyond the specific application in genetics that Fisher had initially addressed.
Various authors adapted Fisher's original methodology to more generalized statistical contexts.
In these broader applications, the term "score" or "efficient score" started to refer more commonly to the derivative of the log-likelihood function of the statistical model in question.
This conceptual expansion was significantly influenced by a 1948 paper by C. R. Rao, which introduced "efficient score tests" that employed the derivative of the log-likelihood function.[12] Thus, what began as a specialized term in the realm of genetic statistics has evolved to become a fundamental concept in broader statistical theory, often associated with the derivative of the log-likelihood function.