Z-Score
z = (x − μ) / σ

where x is the value being standardized, μ is the mean of the distribution, and σ is its standard deviation.

The z-score standardizes any value onto a common scale, making it possible to compare values from different distributions.

With the value in A1, the mean in B1, and the standard deviation in C1:
=(A1-B1)/C1

Or using the built-in STANDARDIZE function:
=STANDARDIZE(A1,B1,C1)

What a Z-Score Tells You

A z-score (also called a standard score) tells you how many standard deviations a particular value lies above or below the mean of its distribution. A z-score of 0 means the value equals the mean exactly. A z-score of +1.5 means the value is 1.5 standard deviations above the mean, and a z-score of −2 means it is 2 standard deviations below.
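The formula at the top translates directly into a few lines of Python (a minimal sketch; the function name z_score is illustrative):

```python
def z_score(x, mean, sd):
    # How many standard deviations x lies above (+) or below (-) the mean
    return (x - mean) / sd

print(z_score(72, 65, 5))    # 1.4 standard deviations above the mean
print(z_score(55, 65, 5))    # -2.0, i.e. two standard deviations below
```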

Z-scores are powerful because they standardize values from different distributions onto a common scale, making comparison possible. A student who scores 72 on a test with mean 65 and standard deviation 5 has a z-score of 1.4 — better than about 92% of test-takers. A student who scores 88 on a different test with mean 80 and standard deviation 12 has a z-score of 0.67 — better than about 75% of takers. The z-scores enable fair comparison despite different scales.
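The two-student comparison can be checked with Python's standard library, which ships a normal distribution in the statistics module (a sketch; NormalDist requires Python 3.8+):

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, standard deviation 1

# Student A: 72 on a test with mean 65 and SD 5
z_a = (72 - 65) / 5
# Student B: 88 on a different test with mean 80 and SD 12
z_b = (88 - 80) / 12

# The CDF converts each z-score to the fraction of takers who scored below it
print(f"A: z = {z_a:.2f}, beats {std_normal.cdf(z_a):.0%}")  # ~92%
print(f"B: z = {z_b:.2f}, beats {std_normal.cdf(z_b):.0%}")  # ~75%
```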

Z-Scores and the Normal Distribution

Z-scores are most powerful when applied to normally distributed data. The standard normal distribution (mean=0, SD=1) has well-characterized probabilities for any given z-score, tabulated in z-tables and computed by statistical software. A z-score of 1.96 corresponds to the 97.5th percentile — the basis for the common 95% confidence interval, where the critical z-values are ±1.96.
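The ±1.96 critical values can be recovered by inverting the standard normal CDF, i.e. asking which z-score sits at the 97.5th percentile (a sketch using the standard library):

```python
from statistics import NormalDist

std_normal = NormalDist()

# For a 95% confidence interval, 2.5% sits in each tail, so the upper
# critical value is the z-score at the 97.5th percentile.
z_crit = std_normal.inv_cdf(0.975)
print(round(z_crit, 2))   # 1.96
```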

The correspondence between z-scores and percentiles allows precise probability statements: a value with z = 2.33 is at the 99th percentile, meaning only 1% of values in a normal distribution exceed it. A value with z = -1.65 is at the 5th percentile. These specific z-values appear constantly in statistical hypothesis testing and quality control.
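The same CDF maps any z-score to its percentile, confirming the specific values quoted above (a sketch):

```python
from statistics import NormalDist

std_normal = NormalDist()

for z in (2.33, 1.96, -1.65):
    percentile = std_normal.cdf(z) * 100
    print(f"z = {z:+.2f} is roughly the {percentile:.1f}th percentile")
```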

Z-Scores in Hypothesis Testing

The z-test is one of the most fundamental statistical tests. Given a sample mean, a population mean under the null hypothesis, and a known standard deviation, the z-statistic tells you how unusual your sample result would be if the null hypothesis were true. Large absolute z-values (typically |z| > 1.96 for a two-tailed test at α=0.05) lead to rejection of the null hypothesis.

The logic: if the null hypothesis is true, sample means from repeated sampling would follow a normal distribution. A z-score of 2.5 means your sample result is 2.5 standard deviations from what the null hypothesis predicts — an outcome that would occur only about 1.2% of the time by chance. This low probability becomes the p-value, which guides the decision to reject or retain the null hypothesis.
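That 1.2% figure is just the two-tailed p-value for z = 2.5, which takes one line to compute (a sketch; the helper name two_tailed_p is illustrative):

```python
from statistics import NormalDist

def two_tailed_p(z):
    # Probability of a result at least |z| SDs from the mean, in either tail
    return 2 * (1 - NormalDist().cdf(abs(z)))

print(round(two_tailed_p(2.5), 4))    # ~0.0124, i.e. about 1.2%
print(round(two_tailed_p(1.96), 3))   # ~0.05, the classic significance cutoff
```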

Applications Beyond Statistics Classes

Finance uses z-scores for bond yield comparisons, risk scoring, and the Altman Z-Score (a bankruptcy prediction model). Quality control uses z-scores in Six Sigma processes: a "six sigma" process is one whose nearest specification limit lies 6 standard deviations from the process mean, which, after the conventional 1.5-sigma allowance for process drift, corresponds to about 3.4 defects per million opportunities. Sports analytics uses z-scores to compare player performance across different eras, leagues, or statistical environments where raw numbers aren't directly comparable.

Frequently Asked Questions

What does a z-score tell you?
How many standard deviations a value is above or below the mean. A z-score of 0 means the value equals the mean. +1 means one standard deviation above. -2 means two standard deviations below.

What counts as a high or unusual z-score?
It depends on context. In a normal distribution, about 68% of data falls within z = ±1, 95% within z = ±2, and 99.7% within z = ±3. A z-score beyond ±2 is unusual; beyond ±3 is very rare.

How are z-scores used in hypothesis testing?
Z-scores are used to calculate p-values. You compare your z-score to a critical value (1.96 for 95% confidence) to decide whether to reject the null hypothesis. If |z| > critical value, the result is statistically significant.