Cook's distance

In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis.[1]

In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to be able to obtain more data points.

It is named after the American statistician R. Dennis Cook, who introduced the concept in 1977.[2][3]

Data points with large residuals (outliers) and/or high leverage may distort the outcome and accuracy of a regression.

Cook's distance measures the effect of deleting a given observation.

Points with a large Cook's distance are considered to merit closer examination in the analysis.

Cook's distance $D_i$ of observation $i$ (for $i = 1, \dots, n$) is defined as the sum of all the changes in the regression model when observation $i$ is removed from it:

$$D_i = \frac{\sum_{j=1}^{n} \left( \hat{y}_j - \hat{y}_{j(i)} \right)^{2}}{p s^2},$$

where $p$ is the rank of the model (i.e., the number of covariates or predictors for each observation, equal to the number of independent variables in the design matrix), $\hat{y}_{j(i)}$ is the fitted response value obtained when excluding observation $i$, and $s^2 = \mathbf{e}^{\top}\mathbf{e} / (n - p)$ is the mean squared error of the regression model, with $\mathbf{e} = \mathbf{y} - \hat{\mathbf{y}}$ the vector of residuals.[5]

Equivalently, Cook's distance can be expressed using the leverage $h_{ii}$, the $i$-th diagonal element of the projection (hat) matrix $\mathbf{H} = \mathbf{X}\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{X}^{\top}$, and the $i$-th residual $e_i = y_i - \hat{y}_i$:

$$D_i = \frac{e_i^{2}}{p s^2} \cdot \frac{h_{ii}}{\left(1 - h_{ii}\right)^{2}}.$$
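The leverage form lends itself to a vectorized computation that avoids refitting the model $n$ times. Below is a minimal NumPy sketch, with made-up data purely for illustration, that computes every $D_i$ and cross-checks one value against the deletion definition:

```python
# A minimal NumPy sketch of Cook's distance via the leverage form
# D_i = e_i^2 / (p * s^2) * h_ii / (1 - h_ii)^2; the data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3                                # p = columns (rank) of X
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T        # projection (hat) matrix
h = np.diag(H)                              # leverages h_ii
e = y - H @ y                               # residuals e_i
s2 = e @ e / (n - p)                        # mean squared error s^2
D = e**2 / (p * s2) * h / (1 - h) ** 2      # Cook's distance, all i at once

# Cross-check one point against the deletion definition of D_i.
i = 0
keep = np.arange(n) != i
b_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
assert np.isclose(D[i], np.sum((H @ y - X @ b_i) ** 2) / (p * s2))
```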

There are different opinions regarding what cut-off values to use for spotting highly influential points. Since Cook's distance is in the metric of an $F$ distribution with $p$ and $n - p$ degrees of freedom, the median point of that distribution can be used as a cut-off; because this value is close to 1 for large $n$, a simple operational guideline of $D_i > 1$ has been suggested. Others have indicated that $D_i > 4/n$, where $n$ is the number of observations, might be used.
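As a small illustration of these rules of thumb, here is a hypothetical helper (the function name and interface are our own, not from any package) that flags indices under both cut-offs, given a vector of Cook's distances such as `D` from the sketch above:

```python
# Illustrative helper: apply the two common rule-of-thumb cut-offs,
# D_i > 1 and D_i > 4/n, to a vector of Cook's distances.
import numpy as np

def flag_influential(D: np.ndarray) -> dict:
    n = len(D)
    return {
        "D > 1": np.flatnonzero(D > 1.0),
        "D > 4/n": np.flatnonzero(D > 4.0 / n),
    }
```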

The vector $\hat{\mathbf{y}} - \hat{\mathbf{y}}_{(i)}$ of changes in the fitted values caused by deleting the $i$-th observation has a covariance matrix of rank one, and therefore it is distributed entirely over a one-dimensional subspace (a line, say $\mathcal{L}$) of the $n$-dimensional space. However, in the introduction of Cook's distance, a scaling matrix of full rank $n$ is chosen, and as a result $\hat{\mathbf{y}} - \hat{\mathbf{y}}_{(i)}$ is treated as if it were a random vector distributed over the whole $n$-dimensional space. Hence the Cook's distance measure is likely to distort the real influence of observations and can misidentify which observations are influential.
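The rank-one structure follows from the standard case-deletion identity. With $\mathbf{b}$ and $\mathbf{b}_{(i)}$ the least-squares estimates computed with and without observation $i$, and $\mathbf{x}_i$ the $i$-th row of $\mathbf{X}$ written as a column vector,

$$\hat{\mathbf{y}} - \hat{\mathbf{y}}_{(i)} = \mathbf{X}\left(\mathbf{b} - \mathbf{b}_{(i)}\right) = \frac{e_i}{1 - h_{ii}}\, \mathbf{X}\left(\mathbf{X}^{\top}\mathbf{X}\right)^{-1}\mathbf{x}_i,$$

i.e., a fixed direction scaled by a single scalar, so the vector indeed varies only along a line.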

(Note that a deleted point generally is not exactly on the regression line that was fitted without observation $i$, so the deleted residual $y_i - \hat{y}_{i(i)} = e_i / (1 - h_{ii})$ is typically larger in magnitude than the ordinary residual $e_i$.)

$D_i$ can be interpreted as the distance one's estimates move within the confidence ellipsoid that represents a region of plausible values for the parameters. This is shown by an alternative but equivalent representation of Cook's distance in terms of changes to the estimates of the regression parameters between the cases where the particular observation is either included in or excluded from the regression analysis.
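Written out in the notation above, this equivalent parameter-space form is

$$D_i = \frac{\left(\mathbf{b} - \mathbf{b}_{(i)}\right)^{\top} \mathbf{X}^{\top}\mathbf{X}\, \left(\mathbf{b} - \mathbf{b}_{(i)}\right)}{p\, s^2},$$

which measures the shift $\mathbf{b} - \mathbf{b}_{(i)}$ in the same quadratic metric that defines the confidence ellipsoid for the regression parameters.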

An alternative to $D_i$ has been proposed. Instead of considering the influence a single observation has on the overall model, the statistic $S_i$ serves as a measure of how sensitive the prediction of the $i$-th observation is to the deletion of each of the observations in the original data set. It can be formulated as a weighted linear combination of the $D_j$'s of all data points; again, the projection matrix is involved in the calculation to obtain the required weights:

$$S_i = \frac{\sum_{j=1}^{n} \left( \hat{y}_i - \hat{y}_{i(j)} \right)^{2}}{p\, s^2\, h_{ii}}.$$

In this context, it can be shown that $S_i$ is asymptotically normal for large sample sizes and models with many predictors. Influential observations can then be identified using a cutoff based on the median and the median absolute deviation of the $S_i$-values within the original data set, i.e., a robust measure of location and a robust measure of scale for the distribution of $S_i$. This approach was found to perform well for high- and intermediate-leverage outliers, even in the presence of masking effects for which $D_i$ failed.

$D_i$ and $S_i$ are closely related because they can both be expressed in terms of the matrix $\mathbf{T}$, whose entry $T_{ij} = \hat{y}_i - \hat{y}_{i(j)}$ records the effect of the deletion of observation $j$ on the prediction for observation $i$. The Cook's distances are obtained from the column sums of squares of $\mathbf{T}$, i.e., $D_j = \left[\operatorname{diag}\left(\mathbf{T}^{\top}\mathbf{T}\right)\right]_j / (p s^2)$, and the sensitivities from its row sums of squares, $S_i = \left[\operatorname{diag}\left(\mathbf{T}\mathbf{T}^{\top}\right)\right]_i / (p s^2 h_{ii})$, where $\operatorname{diag}(\cdot)$ extracts the main diagonal of a square matrix. An eigenvector analysis of $\mathbf{T}^{\top}\mathbf{T}$ and $\mathbf{T}\mathbf{T}^{\top}$, which both share the same eigenvalues, serves as a tool in outlier detection, although the eigenvectors of the sensitivity matrix are more powerful.[13]
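To make the connection concrete, here is a short NumPy sketch (synthetic data; the $S_i$ normalization follows the reconstruction above, and the 4.5-MAD cutoff is one illustrative choice of robust threshold) that builds $\mathbf{T}$ from the deletion identity and recovers both statistics:

```python
# Sketch of the prediction-change matrix T, with T[i, j] = change in the
# prediction for observation i when observation j is deleted.
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
e = y - H @ y
s2 = e @ e / (n - p)

T = H * (e / (1 - h))                    # column j = H[:, j] * e_j / (1 - h_j)

D = np.diag(T.T @ T) / (p * s2)          # Cook's distances (column sums of squares)
S = np.diag(T @ T.T) / (p * s2 * h)      # sensitivities S_i (row sums of squares)

# Robust flagging of S_i via median and median absolute deviation;
# the 4.5 multiplier is an illustrative assumption.
med = np.median(S)
mad = np.median(np.abs(S - med))
flagged = np.flatnonzero(np.abs(S - med) > 4.5 * mad)
```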

Many programs and statistics packages, such as R, Python, Julia, etc., include implementations of Cook's distance.
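For example, in Python's statsmodels the Cook's distances of a fitted OLS model are exposed through the influence diagnostics; the data below are synthetic, and statsmodels returns the distances together with corresponding p-values:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=(30, 2))
y = x @ np.array([1.0, -1.0]) + rng.normal(size=30)

results = sm.OLS(y, sm.add_constant(x)).fit()
cooks_d, pvals = results.get_influence().cooks_distance  # one D_i per row
print(cooks_d.argmax(), cooks_d.max())
```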

The high-dimensional influence measure (HIM) is an alternative to Cook's distance for when $p > n$ (i.e., when there are more predictors than observations).[14] While Cook's distance quantifies an individual observation's influence on the least squares regression coefficient estimate, the HIM measures the influence of an observation on the marginal correlations.