Mean signed deviation

In statistics, the mean signed difference (MSD),[1] also known as mean signed deviation, mean signed error, or mean bias error,[2] is a sample statistic that summarizes how well a set of estimates $\hat{\theta}_i$ match the quantities $\theta_i$ that they are supposed to estimate.

It is one of a number of statistics that can be used to assess an estimation procedure, and it would often be used in conjunction with a sample version of the mean square error.

For example, suppose a linear regression model has been estimated over a sample of data, and is then used to extrapolate predictions of the dependent variable out of sample after the out-of-sample data points have become available. Then $\theta_i$ would be the $i$-th out-of-sample value of the dependent variable, and $\hat{\theta}_i$ would be its predicted value. The mean signed deviation is the average value of $\hat{\theta}_i - \theta_i$.
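As a purely illustrative calculation (the numbers here are hypothetical, not taken from any particular data set), suppose the out-of-sample values are $\theta = (10, 12, 14)$ and the corresponding predictions are $\hat{\theta} = (11, 12, 16)$. Then

$$\operatorname{MSD}(\hat{\theta}) = \frac{(11-10) + (12-12) + (16-14)}{3} = 1,$$

indicating that the predictions overestimate the dependent variable on average.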

The mean signed difference is derived from a set of $n$ pairs, $(\hat{\theta}_i, \theta_i)$, where $\hat{\theta}_i$ is an estimate of the parameter $\theta$ in a case where it is known that $\theta = \theta_i$. In many applications, all the quantities $\theta_i$ will share a common value.

When applied to forecasting in a time series analysis context, a forecasting procedure might be evaluated using the mean signed difference, with $\hat{\theta}_i$ being the predicted value of a series at a given lead time and $\theta_i$ being the value of the series eventually observed for that time-point.

The mean signed difference is defined to be

$$\operatorname{MSD}(\hat{\theta}) = \frac{1}{n} \sum_{i=1}^{n} \left( \hat{\theta}_i - \theta_i \right).$$

The mean signed difference is often useful when the estimates $\hat{\theta}_i$ are biased away from the true values $\theta_i$ in a particular direction. If the $\hat{\theta}_i$ are produced by a biased estimator, the mean signed difference is a useful tool for understanding the direction of the estimator's bias.
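A minimal computational sketch of this definition, assuming the paired true and predicted values are held in plain Python lists (the data below are hypothetical and chosen only for illustration):

def mean_signed_difference(estimates, true_values):
    """Average of (estimate - true value); its sign indicates the direction of bias."""
    if len(estimates) != len(true_values) or not estimates:
        raise ValueError("need two non-empty sequences of equal length")
    return sum(e - t for e, t in zip(estimates, true_values)) / len(estimates)

# Hypothetical out-of-sample values and their predictions.
observed  = [10.0, 12.0, 14.0, 13.0]
predicted = [11.0, 12.5, 16.0, 13.5]

msd = mean_signed_difference(predicted, observed)
print(msd)  # 1.0 here; a positive MSD suggests the predictions overestimate on average

A negative result would instead suggest systematic underestimation, which is the kind of directional information that the (unsigned) mean square error cannot provide.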
