In probability theory and statistics, the mathematical concepts of covariance and correlation are closely related: both measure the degree to which two random variables tend to deviate from their expected values in similar ways.
If X and Y are two random variables, with means (expected values) μX and μY and standard deviations σX and σY, respectively, then their covariance and correlation are as follows:

cov(X, Y) = σXY = E[(X − μX)(Y − μY)]
corr(X, Y) = ρXY = E[(X − μX)(Y − μY)] / (σX σY),

so that

ρXY = σXY / (σX σY),

where E is the expected value operator. Notably, correlation is dimensionless while covariance is in units obtained by multiplying the units of the two variables.
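The relationship between the two quantities can be checked numerically. The following is a minimal sketch in Python with NumPy; the data and the variable names are illustrative, not part of the source.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: y is a noisy linear function of x.
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)

# Population-style covariance: mean of the product of deviations.
cov_xy = np.mean((x - x.mean()) * (y - y.mean()))
# Correlation: covariance divided by the product of standard deviations.
corr_xy = cov_xy / (x.std() * y.std())

# The same quantities via NumPy's built-ins (ddof=0 matches the
# population-style definition used above).
assert np.isclose(cov_xy, np.cov(x, y, ddof=0)[0, 1])
assert np.isclose(corr_xy, np.corrcoef(x, y)[0, 1])
```

Dividing by σX σY is what makes the correlation dimensionless and bounded between −1 and 1, whereas the covariance carries the units of the two variables.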
More generally, the correlation between two variables is 1 (or −1) if one of them always takes on a value that is given exactly by a linear function of the other with a positive (respectively negative) slope.
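This exact-linearity case is easy to verify: for any data, an exact linear transform yields correlation ±1 depending only on the sign of the slope. A small sketch (the sample values are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
y_up = 3.0 * x + 5.0     # exact linear function, positive slope
y_down = -2.0 * x + 1.0  # exact linear function, negative slope

r_up = np.corrcoef(x, y_up)[0, 1]      # equals 1 up to rounding
r_down = np.corrcoef(x, y_down)[0, 1]  # equals -1 up to rounding
```

The intercept (5.0 or 1.0 above) has no effect on the correlation; only the sign of the slope matters.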
Although the values of the theoretical covariances and correlations are linked in the above way, the probability distributions of sample estimates of these quantities are not linked in any simple way and they generally need to be treated separately.
In the case of a time series which is stationary in the wide sense, both the means and variances are constant over time (E(Xn+m) = E(Xn) = μX and var(Xn+m) = var(Xn) = σX², and likewise for the variable Y).
In this case the cross-covariance and cross-correlation are functions of the time difference (lag) m:

σXY(m) = E[(Xn − μX)(Yn+m − μY)]
ρXY(m) = E[(Xn − μX)(Yn+m − μY)] / (σX σY).

If Y is the same variable as X, the above expressions are called the autocovariance and autocorrelation:

σXX(m) = E[(Xn − μX)(Xn+m − μX)]
ρXX(m) = E[(Xn − μX)(Xn+m − μX)] / σX².
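The lag-dependent definitions above can be estimated from a sample by averaging products of deviations at offset m. Below is a minimal sketch in Python with NumPy; the helper name `autocorrelation` and the AR(1) test series are illustrative assumptions, not from the source.

```python
import numpy as np

def autocorrelation(x, m):
    """Sample autocorrelation at lag m for a wide-sense stationary series:
    rho_XX(m) = E[(X_n - mu_X)(X_{n+m} - mu_X)] / sigma_X^2."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    # Average the product of deviations m steps apart (the autocovariance),
    # then normalize by the variance.
    autocov = np.mean((x[:len(x) - m] - mu) * (x[m:] - mu))
    return autocov / var

# A wide-sense stationary example: the AR(1) process
# x_n = 0.8 * x_{n-1} + noise has theoretical autocorrelation 0.8**m.
rng = np.random.default_rng(1)
n = 100_000
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = 0.8 * x[i - 1] + eps[i]

# autocorrelation(x, 0) is exactly 1; autocorrelation(x, 1) is near 0.8.
```

At lag 0 the autocovariance reduces to the variance, so the autocorrelation is exactly 1; for a stationary series the estimate depends only on the lag m, not on the position n, which is precisely what wide-sense stationarity guarantees.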