Standard Deviation

A statistical term that measures how widely a set of values is spread around its mean.


Given a selection of numbers, the mean or average tells you roughly how big they all are, and the standard deviation tells you whether they're tightly grouped or spread out widely.

To calculate the standard deviation, first find the average; then, for each number in your sample, write down the difference between it and the average. Square these differences, average the squares, and take the square root of that average. In other words, the standard deviation is the square root of the average of the squared deviations from the average. (Technically, when working with a specific set of values, a sample, you should divide by N-1 rather than N when taking that "average". This is a minor detail until it becomes important 8-)
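
For concreteness, here is a minimal sketch of that procedure in Python (the function standard_deviation and the example data are purely illustrative, not taken from any library):

 import math

 def standard_deviation(values, sample=True):
     # Step 1: find the average.
     mean = sum(values) / len(values)
     # Step 2: take the squared deviation of each value from the average.
     squared_deviations = [(x - mean) ** 2 for x in values]
     # Step 3: average the squares (dividing by N-1 for a sample, N for a
     # whole population) and take the square root.
     divisor = len(values) - 1 if sample else len(values)
     return math.sqrt(sum(squared_deviations) / divisor)

 print(standard_deviation([2, 4, 4, 4, 5, 5, 7, 9], sample=False))  # 2.0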

The main (and most useful) properties are that, roughly speaking (specifically, for the Normal Distribution), about two thirds of samples lie within one standard deviation of their average, 95% lie within two standard deviations, 99.7% within three, and 99.99% within four. The standard deviation itself is also severely affected by outlying points.
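
Those figures are easy to check empirically; a sketch, assuming Python's standard random module (the seed and sample size are arbitrary):

 import random

 random.seed(1)
 samples = [random.gauss(0, 1) for _ in range(100_000)]
 for k in (1, 2, 3, 4):
     inside = sum(1 for x in samples if abs(x) <= k)
     print(f"within {k} s.d.: {100 * inside / len(samples):.2f}%")

On a run of this size the printed percentages come out close to 68%, 95%, 99.7% and 99.99%.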

Actually, this is not precisely true. The percentages above refer to the portion of a normal distribution curve (a pure theoretical construct) lying within the various ranges. If you work with samples (even rather large samples), you will usually see every point lying within 3 standard deviations of the mean. This is why points lying outside of 3 standard deviations are considered statistically significant.


About the difference between N and N-1: if you somehow know the mean of the population in advance, you calculate the s.d. of a sample by dividing by N. However, if you use the same sample to compute both the mean and the s.d., then you are using the same data for two purposes and the result gets skewed, or biased. To avoid this bias it suffices to compute the s.d. with a division by N-1.
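
Python's standard statistics module happens to expose both conventions, which makes the size of the difference easy to see (the data here is made up):

 import statistics

 data = [2, 4, 4, 4, 5, 5, 7, 9]
 # pstdev divides by N (treats the data as the whole population).
 # stdev divides by N-1 (treats the data as a sample; Bessel's correction).
 print(statistics.pstdev(data))  # 2.0
 print(statistics.stdev(data))   # roughly 2.138

With only eight values the difference is already visible; as N grows, the two results converge.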

That's not very clear. How on earth would you know the population mean in advance (without also knowing the s.d. of the population)? Are you suggesting that if you do, you use it instead of the sample mean in the formula for the s.d. of the sample?

Moreover, in "real life", when does one use samples so small that using N - 1 instead of N makes a significant difference? The SixSigma page has a real life example.

BTW, what are the corresponding methods when the distribution is expected not to be normal?


While it's true that WikiIsNotaDictionary, this concept is very useful in many contexts, and the above gives a slightly different slant on the idea as compared with the definitions elsewhere.

See http://mathworld.wolfram.com/StandardDeviation.html for much more information.


SixSigma is a quality metric measured in StandardDeviations.

The SixSigma page has examples of the use of StandardDeviations.


See DistributionOfAllStatistics - the scale at the bottom marks the StandardDeviations from the central average.

CategoryStatistics SixSigma

