In statistics, the Gauss–Markov theorem (or simply Gauss theorem for some authors)[1] states that the ordinary least squares (OLS) estimator has the lowest sampling variance within the class of linear unbiased estimators, provided the errors in the linear regression model are uncorrelated, have equal variances, and have expected value zero.
[2] The errors do not need to be normal, nor do they need to be independent and identically distributed (only uncorrelated with mean zero and homoscedastic with finite variance).
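As a minimal numerical sketch of the OLS estimator under these conditions (the data-generating process, variable names, and values below are invented purely for illustration):

```python
import numpy as np

# Simulated data satisfying the Gauss–Markov conditions: uncorrelated,
# zero-mean, equal-variance errors that are not normally distributed.
rng = np.random.default_rng(0)
n = 200
beta_true = np.array([2.0, -1.0, 0.5])

X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
eps = rng.uniform(-1.0, 1.0, size=n)      # non-normal, but mean zero and homoscedastic
y = X @ beta_true + eps

# OLS estimate: beta_hat = (X^T X)^{-1} X^T y, computed via a least-squares solve
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)                            # close to beta_true
```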
Although the sample responses \(y_i\) are observable, the following statements and arguments, including the assumptions and proofs, are made conditional only on knowing the regressor values \(X_{ij}\), not the responses \(y_i\).
(Since we are considering the case in which all the parameter estimates are unbiased, this mean squared error is the same as the variance of the linear combination.)
This is equivalent to the condition that \(\operatorname{Var}(\tilde\beta) - \operatorname{Var}(\hat\beta)\) is a positive semi-definite matrix for every other linear unbiased estimator \(\tilde\beta\).
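The following sketch illustrates this comparison numerically. Any other linear unbiased estimator can be written as \((A + D)y\) with \(DX = 0\), where \(A = (X^\top X)^{-1}X^\top\) is the OLS coefficient matrix, and the difference of the variance matrices comes out positive semi-definite (the construction and values below are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma2 = 50, 3, 1.0

X = rng.normal(size=(n, k))             # design matrix (full column rank with high probability)
XtX_inv = np.linalg.inv(X.T @ X)
A = XtX_inv @ X.T                       # OLS: beta_hat = A @ y

# Build another linear unbiased estimator (A + D) @ y with D @ X = 0 by
# projecting an arbitrary matrix onto the orthogonal complement of col(X).
M = rng.normal(size=(k, n))
P = X @ XtX_inv @ X.T                   # projection onto the column space of X
D = M @ (np.eye(n) - P)
assert np.allclose(D @ X, 0)

# Under the Gauss–Markov conditions Var(y | X) = sigma2 * I, so Var(C y) = sigma2 * C @ C.T.
var_ols   = sigma2 * XtX_inv
var_other = sigma2 * (A + D) @ (A + D).T

# The difference should be positive semi-definite: eigenvalues >= 0 up to rounding.
print(np.linalg.eigvalsh(var_other - var_ols))
```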
Proof that the OLS estimator indeed minimizes the sum of squared residuals may proceed by computing the Hessian matrix of the sum of squares and showing that it is positive definite.
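A minimal sketch of that calculation, assuming the design matrix \(X\) has full column rank: the objective, its gradient, and its Hessian with respect to the coefficient vector \(b\) are

```latex
S(b) = (y - Xb)^\top (y - Xb), \qquad
\nabla_b S(b) = -2\, X^\top (y - Xb), \qquad
\nabla_b^2 S(b) = 2\, X^\top X .
```

Since \(v^\top (X^\top X)\, v = \lVert Xv \rVert^2 > 0\) for every nonzero \(v\) when \(X\) has full column rank, the Hessian is positive definite and the stationary point \(\hat\beta = (X^\top X)^{-1} X^\top y\) is the unique minimizer.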
The condition that \(\operatorname{Var}(\tilde\beta) - \operatorname{Var}(\hat\beta)\) is a positive semidefinite matrix is equivalent to the property that the best linear unbiased estimator of \(\ell^\top\beta\) is \(\ell^\top\hat\beta\) for every vector \(\ell\) of coefficients.
The generalized least squares (GLS), developed by Aitken,[5] extends the Gauss–Markov theorem to the case where the error vector has a non-scalar covariance matrix.
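As a sketch of the standard form of that estimator (writing \(\Omega\) for the known, positive-definite error covariance structure, so that \(\operatorname{Var}(\varepsilon \mid X) = \sigma^2 \Omega\)):

```latex
\hat\beta_{\mathrm{GLS}} = \left( X^\top \Omega^{-1} X \right)^{-1} X^\top \Omega^{-1} y .
```

When \(\Omega = I\), this reduces to the OLS estimator.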
In most treatments of OLS, the regressors (parameters of interest) in the design matrix \(\mathbf{X}\) are assumed to be fixed in repeated samples.
This assumption is considered inappropriate for a predominantly nonexperimental science like econometrics.
The independent variables can take non-linear forms as long as the parameters are linear.
An equation with a parameter dependent on an independent variable does not qualify as linear, for example \(y = \alpha + \beta(x)\, x\), where \(\beta(x)\) is a function of \(x\).
Data transformations are often used to convert an equation into a linear form.
For example, the Cobb–Douglas function, which is often used in economics, is nonlinear:

\(Y = A L^{\alpha} K^{1-\alpha} e^{\varepsilon}\)

But it can be expressed in linear form by taking the natural logarithm of both sides:[8]

\(\ln Y = \ln A + \alpha \ln L + (1-\alpha) \ln K + \varepsilon\)

This assumption also covers specification issues: assuming that the proper functional form has been selected and there are no omitted variables.
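As an illustration, a short sketch that simulates such a relationship and recovers the parameters by OLS on the log-transformed equation (the numbers and names are invented for the example):

```python
import numpy as np

# Simulate a Cobb–Douglas relationship Y = A * L^a * K^b * exp(eps)
# and estimate it by OLS after taking logarithms.
rng = np.random.default_rng(0)
n = 500
A, a, b = 3.0, 0.6, 0.4

L = rng.uniform(1.0, 10.0, size=n)
K = rng.uniform(1.0, 10.0, size=n)
eps = rng.normal(0.0, 0.1, size=n)
Y = A * L**a * K**b * np.exp(eps)

# log Y = log A + a*log L + b*log K + eps  -- linear in the parameters
X = np.column_stack([np.ones(n), np.log(L), np.log(K)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print(np.exp(coef[0]), coef[1], coef[2])   # approximately A, a, b
```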
This assumption is violated if the explanatory variables are measured with error, or are endogenous.
[10] Endogeneity can be the result of simultaneity, where causality flows back and forth between the dependent and independent variables.
Instrumental variable techniques are commonly used to address this problem.
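A sketch of one such technique, two-stage least squares, on invented data with a single endogenous regressor and one instrument:

```python
import numpy as np

# Two-stage least squares sketch: the regressor x is correlated with the error
# through an unobserved confounder u, while the instrument z is not.
rng = np.random.default_rng(0)
n = 2000
u = rng.normal(size=n)                         # unobserved confounder
z = rng.normal(size=n)                         # instrument
x = 0.8 * z + 0.6 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u + rng.normal(size=n)     # true slope is 2.0

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

print("OLS slope:", ols(X, y)[1])              # biased because x is endogenous

# Stage 1: project x on the instrument; Stage 2: regress y on the fitted values.
x_hat = Z @ ols(Z, x)
print("2SLS slope:", ols(np.column_stack([np.ones(n), x_hat]), y)[1])  # close to 2.0
```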
A violation of this assumption is perfect multicollinearity, i.e. some explanatory variables are linearly dependent.
[11] Multicollinearity (as long as it is not "perfect") can be present, resulting in a less efficient but still unbiased estimate.
The estimates will be less precise and highly sensitive to particular sets of data.
[12] Multicollinearity can be detected from the condition number or the variance inflation factor, among other tests.
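A sketch of both diagnostics on invented, nearly collinear data:

```python
import numpy as np

# Diagnostics for (imperfect) multicollinearity: the condition number of the
# standardized design matrix and variance inflation factors.
rng = np.random.default_rng(0)
n = 300
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.05 * rng.normal(size=n)     # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Condition number of the standardized design matrix: large values signal trouble.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
print("condition number:", np.linalg.cond(Xs))

# VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j on the others.
def vif(X, j):
    others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    fitted = others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
    resid = X[:, j] - fitted
    r2 = 1.0 - resid.var() / X[:, j].var()
    return 1.0 / (1.0 - r2)

print("VIFs:", [round(vif(X, j), 1) for j in range(X.shape[1])])
```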
This implies the error term has uniform variance (homoscedasticity) and no serial correlation.
The term "spherical errors" will describe the multivariate normal distribution: if
is the formula for a ball centered at μ with radius σ in n-dimensional space.
[14] Heteroskedasticity occurs when the variance of the error term is correlated with an independent variable.
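A short simulated illustration (with invented data) in which the error variance grows with the regressor, so the residual spread differs across its range:

```python
import numpy as np

# Heteroskedastic errors: the error standard deviation is proportional to x.
rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(1.0, 10.0, size=n)
eps = rng.normal(0.0, 0.5 * x)
y = 1.0 + 2.0 * x + eps

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

low, high = x < np.median(x), x >= np.median(x)
print("residual variance, small x:", resid[low].var())
print("residual variance, large x:", resid[high].var())   # markedly larger
```

OLS remains unbiased in this setting, but the equal-variance condition of the theorem no longer holds.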
Autocorrelation can be visualized on a data plot: a given observation is more likely to lie above the fitted regression line if adjacent observations also lie above it. Autocorrelation is common in time-series data, where a dependent variable may take a while to fully absorb a shock. Spatial autocorrelation can also occur, since neighboring geographic areas are likely to have similar errors.
Autocorrelation may be the result of misspecification such as choosing the wrong functional form.
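A sketch of one simple numerical check, the lag-1 correlation of the OLS residuals, on an invented series with slowly decaying shocks:

```python
import numpy as np

# Fit OLS to a series with AR(1) errors and inspect the lag-1 residual correlation.
rng = np.random.default_rng(0)
n = 1000
t = np.arange(n, dtype=float)

eps = np.zeros(n)
for i in range(1, n):                      # AR(1) errors: a shock is absorbed slowly
    eps[i] = 0.8 * eps[i - 1] + rng.normal()

y = 1.0 + 0.05 * t + eps
X = np.column_stack([np.ones(n), t])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]

lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print("lag-1 residual correlation:", lag1)   # near 0.8 here; near 0 for uncorrelated errors
```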
When the spherical errors assumption is violated, the generalized least squares estimator can be shown to be BLUE.
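A sketch of the GLS computation via a whitening transform, using an invented AR(1)-style error covariance:

```python
import numpy as np

# GLS sketch: with a known, positive-definite error covariance Omega, "whiten" the
# model with the inverse Cholesky factor and run OLS on the transformed data.
rng = np.random.default_rng(0)
n, rho = 200, 0.7
beta_true = np.array([1.0, 2.0])

idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])      # AR(1) correlation structure
L = np.linalg.cholesky(Omega)

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + L @ rng.normal(size=n)              # errors with covariance Omega

W = np.linalg.inv(L)                                    # whitening transform
beta_gls, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
print(beta_gls)    # equals (X^T Omega^{-1} X)^{-1} X^T Omega^{-1} y
```

Because the whitened errors satisfy the spherical-errors condition, running OLS on the transformed data reproduces the GLS estimator.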