In probability theory and statistics, the definition of variance is either the expected value of the SDM (when considering a theoretical distribution) or its average value (for actual experimental data).
Computations for analysis of variance involve the partitioning of a sum of SDM.
An understanding of the computations involved is greatly enhanced by a study of the statistical value $\operatorname{E}(X^2)$, where $\operatorname{E}$ is the expected value operator.

For a random variable $X$ with mean $\mu$ and variance $\sigma^2$,

$$\sigma^2 = \operatorname{E}(X^2) - \mu^2.$$
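This identity is easy to check numerically. The sketch below is a minimal Monte Carlo verification in Python using NumPy, with illustrative parameter values $\mu = 3$ and $\sigma = 2$ (assumptions chosen only for the example):

```python
import numpy as np

# Monte Carlo check of sigma^2 = E(X^2) - mu^2 for a normal variable.
# The parameters mu = 3.0 and sigma = 2.0 are illustrative assumptions.
rng = np.random.default_rng(seed=0)
mu, sigma = 3.0, 2.0

x = rng.normal(mu, sigma, size=1_000_000)

e_x2 = np.mean(x**2)     # Monte Carlo estimate of E(X^2)
print(e_x2 - mu**2)      # approximately sigma^2 = 4.0
print(sigma**2)          # exact value for comparison
```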
Rearranging the identity above gives

$$\operatorname{E}(X^2) = \sigma^2 + \mu^2.$$

From this, the following can be derived for a sample of $n$ independent observations:

$$\operatorname{E}\left(\sum x^2\right) = n\sigma^2 + n\mu^2,$$

$$\operatorname{E}\left(\left(\sum x\right)^2\right) = n\sigma^2 + n^2\mu^2.$$

The sum of squared deviations needed to calculate sample variance (before deciding whether to divide by $n$ or $n - 1$) is most easily calculated as

$$S = \sum x^2 - \frac{\left(\sum x\right)^2}{n}.$$

From the two derived expectations above, the expected value of this sum is

$$\operatorname{E}(S) = n\sigma^2 + n\mu^2 - \frac{n\sigma^2 + n^2\mu^2}{n},$$

which implies

$$\operatorname{E}(S) = (n - 1)\sigma^2.$$

This effectively proves the use of the divisor $n - 1$ in the calculation of an unbiased sample estimate of $\sigma^2$.
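The unbiasedness result can also be verified by simulation. The following sketch draws many samples of an assumed size $n = 10$ from a normal distribution with illustrative parameters $\mu = 5$ and $\sigma = 3$, and compares the average of $S$ with $(n - 1)\sigma^2$:

```python
import numpy as np

# Check E(S) = (n - 1) * sigma^2 by simulation, where
# S = sum(x^2) - (sum(x))^2 / n is the sum of squared deviations.
# Sample size and distribution parameters are illustrative assumptions.
rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 5.0, 3.0, 10, 100_000

x = rng.normal(mu, sigma, size=(reps, n))        # reps independent samples
s = (x**2).sum(axis=1) - x.sum(axis=1)**2 / n    # S for each sample

print(s.mean())             # close to (n - 1) * sigma^2 = 81
print((n - 1) * sigma**2)   # theoretical expectation
```

Dividing $S$ by $n - 1$ therefore yields an unbiased estimate of $\sigma^2$; this matches `np.var(x, ddof=1)` in NumPy.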
Suppose data is available for $k$ different treatment groups having sizes $n_1, \dots, n_k$, with $n = \sum_i n_i$ observations in total. It is assumed that the expected mean of each group is

$$\operatorname{E}(\mu_i) = \mu + T_i$$

and that the variance of each treatment group is unchanged from the population variance $\sigma^2$. Under the null hypothesis that the treatments cause no differences and all the $T_i$ are zero, the expectation simplifies to $\operatorname{E}(\mu_i) = \mu$.

It is now possible to calculate three sums of squares:

Individual:
$$I = \sum x^2, \qquad \operatorname{E}(I) = n\sigma^2 + n\mu^2$$

Treatments:
$$T = \sum_{i=1}^{k} \frac{\left(\sum_j x_{ij}\right)^2}{n_i}, \qquad \operatorname{E}(T) = k\sigma^2 + \sum_{i=1}^{k} n_i (\mu + T_i)^2$$

Under the null hypothesis that the treatments cause no differences and all the $T_i$ are zero, this expectation simplifies to

$$\operatorname{E}(T) = k\sigma^2 + n\mu^2.$$

Combination:
$$C = \frac{\left(\sum x\right)^2}{n}, \qquad \operatorname{E}(C) = \sigma^2 + n\mu^2$$

Under the null hypothesis, the difference of any pair of $I$, $T$, and $C$ does not contain any dependency on $\mu$, only on $\sigma^2$:

$$\operatorname{E}(I - C) = (n - 1)\sigma^2 \qquad \text{(total squared deviations, or total sum of squares)}$$

$$\operatorname{E}(T - C) = (k - 1)\sigma^2 \qquad \text{(treatment squared deviations, or explained sum of squares)}$$

$$\operatorname{E}(I - T) = (n - k)\sigma^2 \qquad \text{(residual squared deviations, or residual sum of squares)}$$
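To illustrate the partition, the sketch below computes $I$, $T$, and $C$ for three made-up treatment groups (the data values are assumptions for the example only) and forms the corresponding sums of squares:

```python
import numpy as np

# Compute the three sums of squares I, T, and C for grouped data and
# form the ANOVA partition. The group data are illustrative assumptions.
groups = [
    np.array([6.0, 8.0, 4.0, 5.0, 3.0, 4.0]),
    np.array([8.0, 12.0, 9.0, 11.0, 6.0, 8.0]),
    np.array([13.0, 9.0, 11.0, 8.0, 7.0, 12.0]),
]
all_x = np.concatenate(groups)
n, k = all_x.size, len(groups)

I = np.sum(all_x**2)                          # individual sum of squares
T = sum(g.sum()**2 / g.size for g in groups)  # treatment sum of squares
C = all_x.sum()**2 / n                        # combination term

print("total     SS =", I - C, " df =", n - 1)
print("treatment SS =", T - C, " df =", k - 1)
print("residual  SS =", I - T, " df =", n - k)
```

Note that the total always partitions exactly: $(I - C) = (T - C) + (I - T)$.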
The constants (n − 1), (k − 1), and (n − k) are normally referred to as the number of degrees of freedom.