I was a bit confused by the article initially:<p>> Perhaps most simply, with a t-statistic of 2, your 95% confidence intervals will nearly touch 0.<p>Your 95% CI <i>will</i> include 0, unless you have more than 50 or so data points, in which case there's little point in using Student's t-distribution; you might as well use the Gaussian, which the author seems to assume, and which I thought gave rise to the z-score (in my mind, t-statistic = t-distribution, z-score = normal distribution).<p>But then looking things up, it turns out the difference is that the z-score is computed with the known population sd, while the t-statistic plugs in the sample sd: z = (x̄ − μ)/(σ/√n) versus t = (x̄ − μ)/(s/√n). So, practically, you'll use the t-statistic (and it will be t-distributed if the population is normally distributed), unless you already know the population sd, in which case you can compute the z-score (whose distribution approaches the normal by the CLT under certain conditions with large enough samples, but is otherwise not predicated on normality in any way).<p>Then all the author was pointing out is that if we take a +/- 2 standard error CI, a statistic of 2 gives a CI from 0 to 4 (in SE units), i.e. a half-width equal to 100% of the estimate, while a statistic of 4, say, gives a CI from 2 to 6, a half-width of just 50%.
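<p>A minimal sketch of that arithmetic (assuming scipy is available; the standard error of 1 and the df values are illustrative, not from the article):

  from scipy import stats

  se = 1.0  # standard error, in arbitrary units
  for t_stat in (2.0, 4.0):
      # a t-statistic of t_stat means the estimate sits t_stat SEs away from 0
      estimate = t_stat * se
      for df in (10, 50, 1000):
          t_crit = stats.t.ppf(0.975, df)  # two-sided 95% critical value
          lo, hi = estimate - t_crit * se, estimate + t_crit * se
          print(f"t={t_stat:.0f}, df={df}: CI=({lo:.2f}, {hi:.2f}), "
                f"half-width = {100 * t_crit / t_stat:.0f}% of estimate")

Around 60 degrees of freedom the critical value crosses 2.0, which is why the CI for a t-statistic of 2 stops including 0 at roughly that sample size.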