Feel the Standard Normal Distribution
Drag the slider to see how much of N(0,1) lies within ±k standard deviations. Visualize the 68-95-99.7 rule in motion and build the intuition behind every z-table lookup.
Standard Normal — The Origin of Everything
Honestly — without this single curve, none of what follows (tests, confidence intervals, the t-distribution, regression) would work.
The standard normal N(0, 1) is a bell curve with mean 0 and standard deviation 1.
The one-line trick "z = (x − μ) / σ" lets every normal distribution collapse onto this same curve — and that's why a single printed table suffices to read off probabilities for any normal in the world.
In other words, it's not the final boss of statistics; it's the origin. Once you own this, the rest of the page reads as "applications of the standard normal".
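As a minimal sketch of that collapse: the CDF of any N(μ, σ²) can be read off the standard-normal CDF Φ after the z-transform. The height numbers (mean 170, sd 7) below are made up purely for illustration.

```python
import math

def std_normal_cdf(z):
    # Phi(z) for N(0,1), written via the error function:
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def normal_cdf(x, mu, sigma):
    # Any normal collapses onto the standard one: z = (x - mu) / sigma
    z = (x - mu) / sigma
    return std_normal_cdf(z)

# Hypothetical heights ~ N(170, 7): P(height <= 184) is exactly Phi(2.0)
print(round(normal_cdf(184, mu=170, sigma=7), 4))  # → 0.9772
```

One CDF function, every normal distribution — that is the entire point of the z-table.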
▶ "68 - 95 - 99.7" — no memorization, just see it
Slide the width k; the blue-filled area IS the probability.
±1σ already covers ~68%, ±2σ ~95%, and ±3σ is nearly everything.
That famous number z = 1.96? It's the two-tail 5% critical value — hypothesis tests and confidence intervals all start there.
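The slider's blue area has a closed form you can check yourself: P(−k ≤ Z ≤ k) = erf(k/√2). A quick sketch:

```python
import math

def coverage(k):
    # P(-k <= Z <= k) for Z ~ N(0,1) equals erf(k / sqrt(2))
    return math.erf(k / math.sqrt(2.0))

for k in (1, 2, 3):
    print(f"±{k}σ covers {coverage(k):.4%}")   # ~68%, ~95%, ~99.7%

# The famous two-tail 5% critical value:
print(round(coverage(1.96), 4))  # → 0.95
```

So "68-95-99.7" isn't memorized trivia; it's three evaluations of one function, and z = 1.96 is simply where that function crosses 0.95.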
▶ Watch every normal collapse onto "that one curve"
Height, IQ, daily stock returns, factory part errors — real-world normal-ish things all have different means and spreads.
Yet apply z = (x − μ) / σ and they all snap onto that pink curve.
It auto-plays on scroll (▶ to replay). That's why every statistical formula needs only one standard-normal table.
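You can replay the animation's punchline numerically: draw from two normals with wildly different scales, apply z = (x − μ) / σ, and the ±1σ mass lands near 68% for both. The parameters (heights ~ N(170, 7), returns ~ N(0.0005, 0.02)) are illustrative assumptions, not real data.

```python
import random

random.seed(42)

def within_1sigma_fraction(samples, mu, sigma):
    # Standardize, then count how much mass falls inside ±1
    z = [(x - mu) / sigma for x in samples]
    return sum(abs(v) <= 1.0 for v in z) / len(z)

heights = [random.gauss(170, 7) for _ in range(200_000)]
returns = [random.gauss(0.0005, 0.02) for _ in range(200_000)]

print(round(within_1sigma_fraction(heights, 170, 7), 3))
print(round(within_1sigma_fraction(returns, 0.0005, 0.02), 3))
# both hover near 0.683 — the standard-normal ±1σ mass
```

Different means, different spreads, same pink curve underneath.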
▼ What comes next
The Central Limit Theorem ahead says: "the standardized mean of samples from almost any distribution approaches the standard normal".
Confidence intervals and hypothesis tests all use the ±1.96-style critical values read off this curve.
t, χ², F are its siblings. Even regression coefficient errors are approximated with the standard normal.
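To preview the CLT claim, here is a small simulation sketch: average n draws from Uniform(0, 1) (which looks nothing like a bell), standardize the mean, and check that it lands inside ±1.96 about 95% of the time. The trial counts are arbitrary choices.

```python
import math
import random
import statistics

random.seed(1)

n = 50           # draws per experiment
trials = 20_000  # number of experiments
mu, var = 0.5, 1 / 12  # mean and variance of Uniform(0, 1)

hits = 0
for _ in range(trials):
    xbar = statistics.mean(random.random() for _ in range(n))
    # Standardize the sample mean: z = (xbar - mu) / (sigma / sqrt(n))
    z = (xbar - mu) / math.sqrt(var / n)
    if abs(z) <= 1.96:
        hits += 1

print(round(hits / trials, 3))  # close to 0.95
```

A flat distribution in, the standard normal's 95% rule out — that is why the rest of the page keeps reaching back to this one curve.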
Short version: nail this one page and the rest becomes "applications". Have fun.