Calculate population or sample standard deviation from any data set. Enter numbers separated by commas.
Standard deviation quantifies the typical distance between individual data points and the mean (average) of a dataset. A small standard deviation means values cluster tightly around the mean; a large standard deviation means they spread widely. It is the most widely used measure of variability in statistics, science, finance, and quality control.
The calculation involves four steps: find the mean, subtract the mean from each value and square the result, average those squared differences (using n-1 for samples), then take the square root. The squaring step prevents positive and negative deviations from canceling out, and also weights larger deviations more heavily — a useful property that makes standard deviation sensitive to outliers.
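The four steps above can be sketched directly in code. This is a minimal illustration, not a production implementation (Python's `statistics` module provides `pstdev` and `stdev` for real use); the `sample` flag switches between the n-1 and n denominators:

```python
import math

def std_dev(data, sample=True):
    """Standard deviation via the four steps described above."""
    n = len(data)
    mean = sum(data) / n                           # step 1: find the mean
    sq_devs = [(x - mean) ** 2 for x in data]      # step 2: square each deviation
    denom = n - 1 if sample else n                 # step 3: average them (n-1 for samples)
    variance = sum(sq_devs) / denom
    return math.sqrt(variance)                     # step 4: take the square root

# Population SD of a small example set: mean is 5, squared deviations sum to 32
print(std_dev([2, 4, 4, 4, 5, 5, 7, 9], sample=False))  # → 2.0
```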
For data that follows a normal (bell-shaped) distribution, standard deviation has a precise interpretive meaning. Approximately 68% of data falls within one standard deviation of the mean, 95% within two standard deviations, and 99.7% within three standard deviations. This rule, sometimes called the empirical rule, is the foundation for many statistical tests and quality control processes.
In practice: if IQ scores have a mean of 100 and a standard deviation of 15, then about 68% of people have IQs between 85 and 115, about 95% fall between 70 and 130, and about 99.7% fall between 55 and 145. A score of 145 is approximately 3 standard deviations above the mean — extremely rare by this rule.
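The empirical rule can be checked against an exact normal distribution. This sketch uses Python's `statistics.NormalDist` with the IQ parameters from the example (mean 100, standard deviation 15):

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ distribution from the example above
for k in (1, 2, 3):
    lo, hi = 100 - 15 * k, 100 + 15 * k
    frac = iq.cdf(hi) - iq.cdf(lo)  # exact probability of falling within k SDs
    print(f"within {k} SD ({lo}-{hi}): {frac:.1%}")
```

The exact figures are 68.3%, 95.4%, and 99.7%, which is why the rule is stated as 68-95-99.7.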
The difference between the two formulas is the denominator: population uses N, sample uses N-1. The N-1 version (Bessel's correction) produces an unbiased estimate of the true population standard deviation when working from a sample. Using N instead would systematically underestimate the true spread.
The intuition: when you estimate the mean from your sample and then measure deviations from that estimate, you're using a value that is already tuned to your specific sample data — deviations from the sample mean are, on average, slightly smaller than deviations from the true population mean. Dividing by N-1 corrects for this. As sample size grows, the difference between N and N-1 becomes negligible: for n=100, the variance denominators differ by about 1%, and the resulting standard deviations by only about 0.5%.
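The bias is easy to demonstrate by simulation. The sketch below draws many small samples from a standard normal distribution (true variance 1.0) and averages the variance estimates computed both ways; the N version lands noticeably below 1.0, while the N-1 version does not:

```python
import random

random.seed(42)
n, trials = 5, 100_000  # small samples make the bias obvious
biased_sum = unbiased_sum = 0.0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # true variance is 1.0
    m = sum(sample) / n
    ss = sum((x - m) ** 2 for x in sample)
    biased_sum += ss / n          # divide by N: systematically too small
    unbiased_sum += ss / (n - 1)  # divide by N-1: Bessel's correction
print(f"divide by N:   {biased_sum / trials:.3f}")   # near 0.8, i.e. (n-1)/n
print(f"divide by N-1: {unbiased_sum / trials:.3f}") # near 1.0
```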
In investing, standard deviation is used as a measure of volatility — how much an asset's returns vary from their average. A stock with high standard deviation of returns is considered riskier than one with low standard deviation, because its future returns are less predictable. Modern portfolio theory, developed by Harry Markowitz, uses standard deviation to construct portfolios that maximize return for a given level of risk.
The Sharpe ratio, one of the most common risk-adjusted performance metrics, divides excess returns by standard deviation. A high Sharpe ratio means you're earning good returns relative to the volatility you're accepting. Two funds with identical returns but different standard deviations have different risk profiles — the one with lower standard deviation is generally preferable.
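The comparison in the last sentence can be made concrete. This is a simplified sketch with hypothetical yearly returns for two made-up funds that share the same average return but differ in volatility (real Sharpe calculations typically annualize from more frequent data):

```python
from statistics import mean, stdev

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return divided by the standard deviation of returns."""
    excess = [r - risk_free for r in returns]
    return mean(excess) / stdev(returns)

# Hypothetical yearly returns: both funds average 10%, so excess returns match
fund_a = [0.08, 0.10, 0.09, 0.11, 0.12]    # low standard deviation
fund_b = [0.25, -0.05, 0.30, -0.10, 0.10]  # high standard deviation
print(sharpe_ratio(fund_a, risk_free=0.02))  # much higher: same return, less risk
print(sharpe_ratio(fund_b, risk_free=0.02))
```

Identical excess returns, very different Sharpe ratios — which is exactly the point: the denominator is where the risk shows up.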