The most common such robust statistics are the interquartile range (IQR) and the median absolute deviation (MAD).
These are contrasted with conventional or non-robust measures of scale, such as the sample standard deviation, which are greatly influenced by outliers.
For example, dividing the IQR by 2√2 erf⁻¹(1/2) (approximately 1.349) makes it a consistent estimator of the population standard deviation if the data follow a normal distribution.
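As a concrete illustration, the following Python sketch (written for this passage, not drawn from the sources above) applies the standard normal-consistency constants: 1.349 ≈ 2√2 erf⁻¹(1/2) for the IQR and 1.4826 ≈ 1/Φ⁻¹(3/4) for the MAD. On clean normal data all three estimates agree; after injecting gross outliers, the scaled IQR and MAD barely move while the sample standard deviation does not survive.

    import numpy as np
    from scipy.special import erfinv

    rng = np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, size=10_000)       # normal data with true sigma = 1

    c_iqr = 2 * np.sqrt(2) * erfinv(0.5)        # ~1.349, IQR consistency constant
    c_mad = 1.4826                              # ~1/Phi^{-1}(3/4), MAD constant

    def scale_estimates(data):
        q75, q25 = np.percentile(data, [75, 25])
        mad = np.median(np.abs(data - np.median(data)))
        return (q75 - q25) / c_iqr, c_mad * mad, data.std(ddof=1)

    print(scale_estimates(x))                   # all three close to 1
    x[:100] = 50.0                              # contaminate 1% with gross outliers
    print(scale_estimates(x))                   # IQR and MAD barely move; std explodes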
Mizera & Müller (2004) propose a robust depth-based estimator for location and scale simultaneously.
If the operator repeated the process only three times, simply taking the median of the three measurements and using the known standard deviation σ of a single weighing would give a confidence interval.
The 200 extra weighings served only to detect and correct for operator error and did nothing to improve the confidence interval.
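A minimal sketch of the three-measurement shortcut described above, in Python (the weighings and the value of σ are hypothetical; the sampling standard deviation of the median of three normal draws is estimated by simulation rather than quoted from a table):

    import numpy as np

    sigma = 0.1                                  # assumed known SD of one weighing
    triple = np.array([10.02, 9.94, 10.05])      # three weighings of one object

    # Sampling SD of the median of three N(0, sigma^2) errors, by simulation.
    rng = np.random.default_rng(0)
    med_sd = np.median(rng.normal(0.0, sigma, size=(100_000, 3)), axis=1).std()

    center = np.median(triple)
    print(f"~95% CI: {center - 1.96 * med_sd:.3f} to {center + 1.96 * med_sd:.3f}")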
With more repetitions, one could use a truncated mean, discarding the largest and smallest values and averaging the rest.
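For instance, with five repetitions one could discard the smallest and largest readings and average the middle three. In Python this is a one-liner, and scipy.stats.trim_mean gives the same result by trimming a fixed fraction from each tail; the 12.50 blunder below is invented for illustration:

    import numpy as np
    from scipy.stats import trim_mean

    readings = np.array([10.01, 10.03, 9.98, 10.02, 12.50])   # last value: a blunder

    truncated = np.sort(readings)[1:-1].mean()      # drop min and max, average the rest
    same = trim_mean(readings, proportiontocut=0.2) # equivalent: trim 20% per tail
    print(truncated, same, readings.mean())         # the plain mean is dragged upward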
In practical applications, where occasional operator errors or balance malfunctions can occur, the assumptions behind simple statistical calculations cannot be taken for granted.
The theoretical analysis of such an experiment is complicated, but it is easy to set up a spreadsheet that draws random numbers from a normal distribution with standard deviation σ to simulate the situation; in Microsoft Excel this can be done with =NORMINV(RAND(),0,σ), as discussed in [4], and the same technique works in other spreadsheet programs such as OpenOffice.org Calc and Gnumeric.
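Outside a spreadsheet the same draw is a single call; in Python, for example, NumPy's normal generator plays the role of =NORMINV(RAND(),0,σ) (the 100 × 3 layout below assumes 100 objects weighed three times each, matching the 200 residuals discussed next):

    import numpy as np

    sigma = 0.1
    rng = np.random.default_rng(0)

    cell = rng.normal(0.0, sigma)                 # one cell: a single simulated error
    sheet = rng.normal(0.0, sigma, size=(100, 3)) # whole sheet: 100 objects x 3 weighings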
After removing obvious outliers, one could subtract the median from the other two values for each object, and examine the distribution of the 200 resulting numbers.
A simple Monte Carlo spreadsheet calculation would show that the standard deviation of these 200 numbers typically falls around 105% to 115% of σ.
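A sketch of that simulation in Python, under the same assumptions as above (100 objects weighed three times each, errors drawn from N(0, σ²)): subtract each row's median from its other two values, pool the 200 residuals, and repeat over many simulated sheets.

    import numpy as np

    sigma = 0.1
    rng = np.random.default_rng(0)

    ratios = []
    for _ in range(1_000):                         # 1,000 simulated spreadsheets
        sheet = rng.normal(0.0, sigma, size=(100, 3))
        resid = sheet - np.median(sheet, axis=1, keepdims=True)
        resid = resid[resid != 0]                  # drop the 100 exact-zero medians
        ratios.append(resid.std() / sigma)

    print(np.percentile(ratios, [5, 50, 95]))      # clusters around 105-115% of sigma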