Standard Deviation
Population: σ = √( Σ(xᵢ − μ)² / N )    Sample: s = √( Σ(xᵢ − x̅)² / (N − 1) )

Step 1: find the mean. Step 2: subtract the mean from each value and square the result. Step 3: average those squared differences (divide by N for a population, by N − 1 for a sample). Step 4: take the square root.

Sample std dev (most common)
=STDEV(A1:A10)    (modern Excel equivalent: =STDEV.S(A1:A10))
Population std dev
=STDEVP(A1:A10)   (modern Excel equivalent: =STDEV.P(A1:A10))
Variance (sample)
=VAR(A1:A10)      (modern Excel equivalent: =VAR.S(A1:A10))

What Standard Deviation Measures

Standard deviation quantifies the typical distance between individual data points and the mean (average) of a dataset. A small standard deviation means values cluster tightly around the mean; a large standard deviation means they spread widely. It is the most widely used measure of variability in statistics, science, finance, and quality control.

The calculation involves four steps: find the mean, subtract the mean from each value and square the result, average those squared differences (using n − 1 for samples), then take the square root. The squaring step prevents positive and negative deviations from canceling out and weights larger deviations more heavily, a property that makes standard deviation sensitive to outliers.
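The four steps can be sketched in Python (a minimal illustration of the formulas above, not how any particular library implements them):

```python
import math

def std_dev(data, sample=True):
    """Standard deviation via the four steps: mean, squared
    deviations, average (n-1 for a sample), square root."""
    n = len(data)
    mean = sum(data) / n                       # Step 1: find the mean
    sq_devs = [(x - mean) ** 2 for x in data]  # Step 2: squared deviations
    denom = n - 1 if sample else n             # Step 3: n-1 for sample, n for population
    variance = sum(sq_devs) / denom
    return math.sqrt(variance)                 # Step 4: square root

data = [2, 4, 4, 4, 5, 5, 7, 9]
print(std_dev(data, sample=False))  # population: 2.0
print(std_dev(data, sample=True))   # sample: ~2.138
```

The same results come from Python's built-in `statistics.pstdev` and `statistics.stdev`, which apply the population and sample formulas respectively.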

The 68-95-99.7 Rule

For data that follows a normal (bell-shaped) distribution, standard deviation has a precise interpretive meaning. Approximately 68% of data falls within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. This rule, sometimes called the empirical rule, is the foundation for many statistical tests and quality control processes.

In practice: if IQ scores have a mean of 100 and a standard deviation of 15, then about 68% of people have IQs between 85 and 115, about 95% fall between 70 and 130, and about 99.7% fall between 55 and 145. A score of 145 is approximately 3 standard deviations above the mean — extremely rare by this rule.
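The IQ example can be checked with a short z-score computation (the mean of 100 and standard deviation of 15 are taken from the text above):

```python
def z_score(x, mean, sd):
    """How many standard deviations x lies from the mean."""
    return (x - mean) / sd

mean_iq, sd_iq = 100, 15

# Bands of the 68-95-99.7 rule:
for k, pct in zip((1, 2, 3), (68, 95, 99.7)):
    lo, hi = mean_iq - k * sd_iq, mean_iq + k * sd_iq
    print(f"~{pct}% of scores fall in [{lo}, {hi}]")

print(z_score(145, mean_iq, sd_iq))  # 3.0 standard deviations above the mean
```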

Sample vs Population Standard Deviation

The difference between the two formulas is the denominator: population uses N, sample uses N-1. The N-1 version (Bessel's correction) produces an unbiased estimate of the true population standard deviation when working from a sample. Using N instead would systematically underestimate the true spread.

The intuition: when you estimate the mean from your sample and then measure deviations from that estimate, you're using a value that is already tuned to your specific sample data. This introduces a slight bias toward underestimation. Dividing by N-1 corrects for this. As sample size grows, the difference between N and N-1 becomes negligible — for n=100, it's a 1% difference.
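A quick simulation makes the bias visible (a sketch using only the standard library; the sample size of 5, seed, and true standard deviation of 10 are chosen for illustration):

```python
import random

random.seed(42)
true_var = 100.0  # true standard deviation is 10, so true variance is 100
biased, unbiased = [], []
for _ in range(2000):
    sample = [random.gauss(50, 10) for _ in range(5)]
    mean = sum(sample) / len(sample)
    ss = sum((x - mean) ** 2 for x in sample)
    biased.append(ss / len(sample))          # divide by N
    unbiased.append(ss / (len(sample) - 1))  # divide by N-1 (Bessel's correction)

avg_biased = sum(biased) / len(biased)
avg_unbiased = sum(unbiased) / len(unbiased)
# The /N estimate sits systematically below the true variance of 100,
# while the /(N-1) estimate averages close to it.
print(avg_biased, avg_unbiased)
```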

Standard Deviation in Finance and Risk

In investing, standard deviation is used as a measure of volatility — how much an asset's returns vary from their average. A stock with high standard deviation of returns is considered riskier than one with low standard deviation, because its future returns are less predictable. Modern portfolio theory, developed by Harry Markowitz, uses standard deviation to construct portfolios that maximize return for a given level of risk.

The Sharpe ratio, one of the most common risk-adjusted performance metrics, divides excess returns by standard deviation. A high Sharpe ratio means you're earning good returns relative to the volatility you're accepting. Two funds with identical returns but different standard deviations have different risk profiles — the one with lower standard deviation is generally preferable.
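The two-funds comparison can be sketched as follows (hypothetical return figures invented for illustration; real Sharpe ratios are usually computed from excess returns over a risk-free rate and annualized, which is omitted here for brevity):

```python
import statistics

def sharpe_ratio(returns, risk_free=0.0):
    """Excess return per unit of volatility (sample standard deviation)."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(returns)

# Two hypothetical funds with the same average return:
fund_a = [0.04, 0.06, 0.05, 0.05, 0.05]   # low volatility
fund_b = [0.15, -0.05, 0.10, 0.00, 0.05]  # high volatility, same mean
print(sharpe_ratio(fund_a, risk_free=0.01))  # higher: same return, less risk
print(sharpe_ratio(fund_b, risk_free=0.01))
```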

Frequently Asked Questions

When should I use sample vs. population standard deviation?
Use the sample formula (s) almost always; it applies when your data is a subset of a larger group. Use the population formula (σ) only when you have every single member of the group with no one left out, which is rare.

Why divide by n − 1 instead of n?
This is called Bessel's correction. When estimating from a sample, dividing by n would slightly underestimate the true variability. Using n − 1 corrects for this bias.

What does standard deviation tell you?
It measures how spread out data is around the mean. In a normal distribution, about 68% of data falls within 1 standard deviation of the mean, 95% within 2, and 99.7% within 3.

What is the difference between variance and standard deviation?
Variance is the average of squared differences from the mean. Standard deviation is the square root of variance, which puts it in the same units as the original data and makes it easier to interpret.
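The variance-to-standard-deviation relationship in one short snippet (hypothetical height data, using Python's statistics module):

```python
import math
import statistics

heights_cm = [160, 165, 170, 175, 180]
var = statistics.variance(heights_cm)  # in cm squared: awkward units
sd = statistics.stdev(heights_cm)      # in cm: same units as the data
print(var, sd)
assert math.isclose(sd, math.sqrt(var))  # std dev is the square root of variance
```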