Type II Error
What It Is
Type II error, also known as beta error, can occur with classical (frequentist) statistical testing. Before the statistical test is conducted, you select a significance level: the p-value threshold that will determine whether you reject the null hypothesis (usually of no difference between groups). A customary threshold for rejecting the null hypothesis is a p-value below 0.05. If your statistical test returns a p-value above the level you selected, you accept the null hypothesis (more precisely, you fail to reject it). So far this sounds like I’m talking about type I error, doesn’t it? Here’s the difference: in “real life” (unbeknownst to you) the null hypothesis is false – you just happened to get a p-value above the threshold you selected (usually 0.05), causing you to make a type II error.
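The scenario above can be sketched in a few lines of Python. This is a minimal illustration using only the standard library: the measurements and function names are hypothetical, and the normal approximation used here is cruder than the t-test you would run on samples this small.

```python
import math

def welch_z(xs, ys):
    """Welch-style test statistic for a difference in means
    (normal approximation; a t reference distribution would be
    more accurate for samples this small)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (my - mx) / math.sqrt(vx / nx + vy / ny)

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical measurements from two groups whose underlying
# populations, suppose, genuinely differ.
group_a = [5.1, 4.8, 5.3, 4.9, 5.0]
group_b = [5.3, 4.9, 5.5, 5.1, 5.2]

z = welch_z(group_a, group_b)
p = two_sided_p(z)
# p lands above the 0.05 threshold, so we fail to reject the null --
# even though (in this hypothetical) it is false. That miss is the
# type II error.
```

The data really do lean in one direction; the test simply lacks the evidence to say so at the chosen threshold, which is exactly the situation the paragraph describes.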
There are some “easy” ways to decrease your likelihood of making a type II error. First, you could raise the significance level you set for rejecting the null hypothesis (say, from 0.05 to 0.10). This makes it easier to reject the null hypothesis. Of course, it also makes it easier to make a type I error, so there’s a tradeoff. Second, you can increase the number of people or objects in your study (the sample size). The larger your sample, the better your chances of rejecting the null hypothesis when a real difference of a given size exists between groups.
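Both levers can be checked by simulation. The sketch below, using only the standard library and hypothetical parameter values (a true difference of half a standard deviation, critical values approximating two-sided alpha levels of 0.05 and 0.10), estimates beta — the type II error rate — by counting how often a real difference fails to reach significance:

```python
import math
import random

def welch_z(xs, ys):
    """Welch-style test statistic (normal approximation)."""
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (my - mx) / math.sqrt(vx / nx + vy / ny)

def type_ii_rate(n_per_group, true_diff=0.5, crit=1.96,
                 trials=2000, seed=1):
    """Estimate beta: the fraction of simulated experiments in which
    a real difference of true_diff (in standard-deviation units)
    fails to reach significance. crit=1.96 corresponds roughly to a
    two-sided alpha of 0.05; crit=1.645 to alpha of 0.10."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(true_diff, 1.0) for _ in range(n_per_group)]
        if abs(welch_z(a, b)) < crit:
            misses += 1  # failed to reject a false null: type II error
    return misses / trials

beta_small_n = type_ii_rate(20)                    # small samples
beta_large_n = type_ii_rate(100)                   # larger samples
beta_looser_alpha = type_ii_rate(20, crit=1.645)   # alpha ~0.10
# Larger samples and a looser alpha threshold both shrink beta.
```

Running this shows beta dropping sharply as the per-group sample size grows from 20 to 100, and dropping (more modestly) when the threshold is loosened from roughly 0.05 to 0.10 — the tradeoff being, as noted above, a higher type I error rate.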
Why It’s Important
Making a type II error causes you to miss the fact that your groups are different, and therefore that the factor that makes them different may influence an outcome. You may abandon a line of inquiry that could have led to a better understanding of the mechanisms of a disease, the effectiveness of a treatment, or some other phenomenon of interest.