
Standard Deviation

The standard deviation is a statistic that measures the dispersion of a dataset relative to its mean. It is calculated as the square root of the variance, which is itself derived from the variation between each data point and the mean. If the data points are farther from the mean, there is higher deviation within the data set; thus, the more spread out the data, the higher the standard deviation.

Calculating Standard Deviation

The formula for standard deviation uses three variables. The first is the value of each point within the data set, traditionally written as x with a subscript denoting each data point (x1, x2, x3, etc.). The mean, or average, of the data points is assigned to the variable M, and the number of data points involved is assigned to the variable n.
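
In symbols, these three variables combine into the sample standard deviation (labeled s here; the symbol is a conventional choice, not one used elsewhere in this article):

    s = sqrt( ((x1 - M)^2 + (x2 - M)^2 + ... + (xn - M)^2) / (n - 1) )

The division by n - 1 rather than n is the sample convention, and it is the one followed in the worked example below.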

To determine the mean value, add the values of the data points together, then divide that total by the number of data points included. For example, if the data points are 5, 7, 3 and 7, the total is 22. Dividing 22 by the number of data points, four in this case, gives a mean of 5.5. This leads to the following determinations: M = 5.5 and n = 4.
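
As a minimal sketch in Python, using the example numbers above:

    # Mean of the example data set
    data = [5, 7, 3, 7]
    n = len(data)        # number of data points: 4
    M = sum(data) / n    # (5 + 7 + 3 + 7) / 4 = 5.5
    print(M)             # 5.5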

The variance is determined by subtracting the mean from each data point, resulting in -0.5, 1.5, -2.5 and 1.5. Each of those values is then squared, resulting in 0.25, 2.25, 6.25 and 2.25. The squared values are then added together for a total of 11, which is divided by n - 1 (3 in this instance), resulting in a variance of approximately 3.67.

The square root of the variance is then calculated, which results in a standard deviation of approximately 1.915.
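
The full calculation can be checked in Python; note that statistics.stdev uses the same n - 1 (sample) formula as the walkthrough above:

    import statistics

    data = [5, 7, 3, 7]
    n = len(data)
    M = sum(data) / n                        # mean: 5.5

    deviations = [x - M for x in data]       # [-0.5, 1.5, -2.5, 1.5]
    squares = [d ** 2 for d in deviations]   # [0.25, 2.25, 6.25, 2.25]
    variance = sum(squares) / (n - 1)        # 11 / 3, about 3.67
    std_dev = variance ** 0.5                # about 1.915

    print(variance, std_dev)
    print(statistics.stdev(data))            # matches: about 1.915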

Standard Deviation vs. Variance

The variance helps determine the size of the data’s spread when compared to the mean value. As the variance gets bigger, more variation in data values occurs, and there may be a larger gap between one data value and another. If the data values are all close together, the variance will be smaller. Variance is more difficult to grasp than standard deviation, however, because it represents a squared result that may not be meaningfully expressed on the same graph as the original dataset.

Standard deviations are usually easier to picture and apply. The standard deviation is expressed in the same unit of measurement as the data, which isn’t necessarily the case with the variance. Using the standard deviation, statisticians can determine whether the data follows a normal curve or another mathematical relationship. If the data follows a normal curve, then about 68 percent of the data points will fall within one standard deviation of the average, or mean, data point. Bigger variances mean the data is more spread out around the mean; smaller variances mean more of the data lies close to the average.
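
One way to see the 68 percent figure is to sample from a normal distribution and count how many points land within one standard deviation of the mean. The sketch below is illustrative only; the sample size and seed are arbitrary choices, not taken from this article:

    import random

    random.seed(0)  # arbitrary seed, for a reproducible illustration
    samples = [random.gauss(0, 1) for _ in range(100_000)]

    mean = sum(samples) / len(samples)
    variance = sum((x - mean) ** 2 for x in samples) / (len(samples) - 1)
    std_dev = variance ** 0.5

    within_one = sum(1 for x in samples if abs(x - mean) <= std_dev)
    print(within_one / len(samples))  # approximately 0.68 for normally distributed data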

What’s Standard Deviation Used For?

Standard deviation is an especially useful tool in investing and trading strategies, as it helps measure market and security volatility and predict performance trends.

As it relates to investing, for example, one can expect an index fund to have a low standard deviation versus its benchmark index, as the fund’s goal is to replicate the index. On the other hand, one can expect aggressive growth funds to have a high standard deviation relative to comparable stock indices, as their portfolio managers make aggressive bets to generate higher-than-average returns.

A lower standard deviation isn’t necessarily preferable. It all depends on the investments one is making, and one’s willingness to assume risk. When dealing with the amount of deviation in their portfolios, investors should consider their personal tolerance for volatility and their overall investment objectives. More aggressive investors may be comfortable with an investment strategy that opts for vehicles with higher-than-average volatility, while more conservative investors may not.
