Type I Error

What It Is

Type I error, also known as alpha error, can occur in classical (frequentist) statistical testing. Before the statistical test is conducted, you select a significance level, alpha – the threshold below which a p-value will lead you to reject the null hypothesis (usually of no difference between groups). A customary level is alpha = 0.05. If your statistical test returns a p-value below this threshold, you reject the null hypothesis. However, in “real life” (unbeknownst to you) the null hypothesis is true – you just happened to get a low p-value. With alpha = 0.05, you had a 5% chance of incorrectly rejecting a true null hypothesis, and in fact it happened. The test “made a mistake,” i.e., a Type I error, by rejecting the null hypothesis when it shouldn’t have. This can occur because you collected data on a sample of patients (or objects or visits, etc.) and not on every patient (object, visit) of interest.
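The 5% rate can be seen directly by simulation. The sketch below (a hypothetical illustration, not from the text; the two-sample z-test approximation, group sizes, and seed are assumptions) draws both groups from the same distribution, so the null hypothesis is true by construction, and counts how often a test at alpha = 0.05 rejects it anyway.

```python
import math
import random

def two_sample_z_test(a, b):
    """Approximate two-sided p-value for a two-sample z-test.

    Uses sample variances and the standard normal distribution;
    a reasonable approximation for moderately large samples.
    """
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

random.seed(0)
ALPHA = 0.05      # significance level, chosen before testing
N_TESTS = 2000    # number of simulated experiments
rejections = 0
for _ in range(N_TESTS):
    # Both groups come from the SAME distribution: the null is true,
    # so every rejection below is a Type I error.
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]
    if two_sample_z_test(group_a, group_b) < ALPHA:
        rejections += 1

print(f"Type I error rate: {rejections / N_TESTS:.3f}")
```

The printed rate hovers near 0.05: even with no real difference between groups, about one experiment in twenty “finds” one.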

Why It’s Important

Type I errors lead you astray by giving you a false clue that the null hypothesis is false (usually by making you think there is a difference between groups when actually there isn’t one). Type I errors can be minimized by choosing a lower significance level (alpha). The tradeoff is that, for a given sample size (or number of patients), you may reduce the statistical power of your test. The chance of committing a Type I error is increased by performing many statistical tests on the same data. There is an old saying, “If you torture your data, it will confess.” The more statistical tests you do with a dataset, the greater your likelihood of committing a Type I error unless you employ some other methods to keep the Type I error in check.
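The inflation from multiple testing can be made concrete. For k independent tests each run at level alpha, the chance of at least one Type I error (the family-wise error rate) is 1 − (1 − alpha)^k. The sketch below illustrates this, along with the Bonferroni correction – one standard example of the “other methods” mentioned above, offered here as an assumption since the text does not name a specific method.

```python
ALPHA = 0.05

# Family-wise error rate: probability of at least one Type I error
# across k independent tests, each run at level ALPHA.
for k in (1, 5, 20, 100):
    fwer = 1 - (1 - ALPHA) ** k
    print(f"{k:3d} tests: chance of >=1 Type I error = {fwer:.2f}")

# Bonferroni correction: run each of k tests at ALPHA / k instead.
# This (conservatively) caps the family-wise rate near ALPHA.
k = 20
bonferroni_fwer = 1 - (1 - ALPHA / k) ** k
print(f"Bonferroni with {k} tests: {bonferroni_fwer:.3f}")
```

With 20 uncorrected tests the chance of at least one false positive is already about 64%; the Bonferroni-adjusted threshold brings it back to roughly 5%, at the cost of making each individual test more conservative.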