
Significance Level and Probability of Making Type I Error

Author: Sophia

What's Covered


In this tutorial, you're going to learn about the significance level of a hypothesis test. Specifically, you will focus on:

  1. Hypothesis Test

1. HYPOTHESIS TEST

You should first understand what is meant by statistical significance. When you calculate a test statistic in a hypothesis test, you can also calculate a p-value. The p-value is the probability that you would have obtained a statistic as extreme as the one you got, that is, as far away from the hypothesized value in either direction, given that the null hypothesis is true. It's a conditional probability.
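To make that concrete, here is a minimal Python sketch, not part of the original lesson and using made-up numbers, of turning a standardized test statistic into a two-tailed p-value with SciPy's normal distribution:

```python
# A minimal sketch with made-up numbers; the values are assumptions.
from scipy import stats

hypothesized_mean = 100   # value the null hypothesis claims (assumed)
sample_mean = 104.2       # statistic you observed (assumed)
std_error = 2.0           # standard error of the mean (assumed)

# Standardize: how many standard errors the statistic sits from the parameter.
z = (sample_mean - hypothesized_mean) / std_error

# Two-tailed p-value: the chance, if the null is true, of a statistic at
# least this far from the hypothesized mean in either direction.
p_value = 2 * stats.norm.sf(abs(z))
print(f"z = {z:.2f}, p-value = {p_value:.4f}")  # z = 2.10, p-value = 0.0357
```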

Sometimes you're willing to attribute whatever difference you found between your statistic and the hypothesized parameter to chance. If that's the case, you fail to reject the null hypothesis. If you're not willing to write off the difference, because the statistic is just too far from the hypothesized value to attribute to chance, you reject the null hypothesis in favor of the alternative.

So this is what it might look like for a two-tailed test.

The hypothesized mean sits at the center of the normal distribution. You might decide that anything two or more standard deviations away from that center is too far away. In that case, anything closer you attribute to chance, and you fail to reject the null hypothesis, whereas anything that far out, too high or too low for you to keep believing the hypothesized value is the real mean, leads you to reject the null hypothesis. Again, this all assumes that the null hypothesis is true.
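That decision rule can be written as a tiny sketch, assuming "two standard errors from the hypothesized mean" is the chosen cutoff for "too far away":

```python
# A sketch of the two-tailed decision rule described above; the cutoff
# of two standard errors is an assumption, not a universal rule.
def decide(z_statistic, cutoff=2.0):
    """Two-tailed decision: reject only when the statistic is too extreme."""
    if abs(z_statistic) >= cutoff:
        return "reject the null hypothesis"       # too far to blame chance
    return "fail to reject the null hypothesis"   # attribute it to chance

print(decide(2.10))   # reject the null hypothesis
print(decide(-0.85))  # fail to reject the null hypothesis
```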

Now think about this. The entire curve is drawn assuming the null hypothesis is true, yet you decide to reject the null hypothesis anyway when the statistic you got is far away, because a statistic like that would rarely happen by chance. Technically, though, if the null hypothesis really is true, rejecting it is the wrong call. The idea that you're comfortable making that error some of the time is captured by the significance level.

Rejecting the null hypothesis in error, that is, rejecting it when the null hypothesis is in fact true, is called a Type I error.

The nice thing is you get to choose how big you want this error to be.

Recall the blue tail regions of that normal curve: you could just as well have decided that "too far away" means three standard deviations from the mean on either side. You get to choose how large you want this error rate to be. That value, how often you're willing to reject the null hypothesis in error, whether that's 1% of the time or 5% of the time, is called the significance level.

It's denoted with the Greek letter alpha (α).
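Once alpha is chosen, it translates directly into the cutoff for a two-tailed test. Here is a sketch using SciPy's quantile function, norm.ppf, which is the inverse of the normal CDF:

```python
# A sketch of turning a chosen alpha into a two-tailed cutoff.
from scipy.stats import norm

for alpha in (0.10, 0.05, 0.01):
    # Split alpha evenly between the two tails, then find the boundary.
    critical_z = norm.ppf(1 - alpha / 2)
    print(f"alpha = {alpha:.2f} -> reject when |z| >= {critical_z:.2f}")

# alpha = 0.10 -> reject when |z| >= 1.64
# alpha = 0.05 -> reject when |z| >= 1.96
# alpha = 0.01 -> reject when |z| >= 2.58
```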

You choose how big you want alpha to be before you run the test. You do it this way to reduce bias: if you had already run the test, you could pick an alpha level that automatically makes your result look more significant than it really is. You don't want to bias your results that way.

Take a look back at this visual here.

The alpha in this case is 0.05. Recall that the 68-95-99.7 rule says about 95% of values fall within two standard deviations of the mean, meaning about 5% of values fall outside those two standard deviations. So you will reject the null hypothesis for the most extreme 5% of cases, the ones you are not willing to attribute to chance variation from the hypothesized mean.
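You can check that claim directly. Note that the rule of thumb's "two standard deviations" gives roughly, not exactly, 5%; an alpha of exactly 0.05 corresponds to a cutoff of 1.96:

```python
# Checking the claim above: the exact tail area beyond two standard
# deviations, versus the "about 5%" from the 68-95-99.7 rule of thumb.
from scipy.stats import norm

print(f"{2 * norm.sf(2):.4f}")     # 0.0455 -- roughly 5% beyond |z| = 2
print(f"{2 * norm.sf(1.96):.4f}")  # 0.0500 -- exactly 5% beyond |z| = 1.96
```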

The level of significance you choose will depend on the type of experiment you're doing. Suppose you are trying to bring a drug to market. You want to be really, really cautious about how often you reject the null hypothesis; you should reject it only if you're quite certain the drug works.

You don't want to erroneously reject the null hypothesis that the drug doesn't work, thereby giving the public a drug that does nothing. If you want to be cautious and rarely reject the null hypothesis in error, you'll choose a low significance level, like 0.01. This means only the most extreme 1% of cases will lead you to reject the null hypothesis.

If you don't believe a Type I error would be that costly, you might allow a higher significance level, like 0.05 or 0.10. Those still seem like small numbers, but think about what they mean: one out of every 20, or one out of every 10, samples of that particular size will have the null hypothesis rejected even when it's true.
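A short simulation illustrates the "one out of every 20" idea: draw many samples from a population where the null hypothesis really is true and count how often a test at alpha = 0.05 rejects anyway. The sample size and standard-normal population below are arbitrary assumptions:

```python
# A simulation sketch: when the null hypothesis (mean = 0) is TRUE,
# a test at alpha = 0.05 still rejects about 1 time in 20.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 100_000
critical_z = norm.ppf(1 - alpha / 2)

samples = rng.normal(loc=0.0, scale=1.0, size=(trials, n))
z = samples.mean(axis=1) / (1.0 / np.sqrt(n))   # sigma = 1 treated as known

type_i_rate = np.mean(np.abs(z) >= critical_z)  # rejections of a TRUE null
print(type_i_rate)  # close to 0.05
```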

Term to Know

  • Significance Level
  • The probability of making a Type I error. Abbreviated with the symbol alpha (α).

Are you willing to make that mistake one out of every 20 times or once every 10 times? Or are you only willing to make that mistake one out of every 100 times? Setting this value to something really low reduces the probability that you make that error.

The one caveat is that you don't want alpha to be too low. Be careful how low you set it: as you lower the probability of a Type I error, you increase the probability of a Type II error. A Type II error is failing to reject the null hypothesis when a difference really does exist. This reduces the power, or sensitivity, of your significance test, meaning you will fail to detect very real differences from the null hypothesis when your alpha level is set too low.
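A companion simulation sketches this trade-off. Here the null hypothesis (mean = 0) is false, with an assumed true mean of 0.5, so every failure to reject is a Type II error; watch power fall as alpha shrinks:

```python
# A simulation sketch of the alpha / power trade-off; the effect size
# (true mean 0.5) and sample size are arbitrary assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n, trials, true_mean = 25, 100_000, 0.5

samples = rng.normal(loc=true_mean, scale=1.0, size=(trials, n))
z = samples.mean(axis=1) / (1.0 / np.sqrt(n))   # sigma = 1 treated as known

for alpha in (0.10, 0.05, 0.01):
    critical_z = norm.ppf(1 - alpha / 2)
    power = np.mean(np.abs(z) >= critical_z)    # correctly rejecting a false null
    print(f"alpha = {alpha:.2f} -> power = {power:.2f}, "
          f"Type II rate = {1 - power:.2f}")

# Approximate output: power falls from about 0.80 at alpha = 0.10
# to about 0.47 at alpha = 0.01, so Type II errors become more common.
```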

Summary

The probability of a Type I error is a value that you get to choose in a hypothesis test. It's called the significance level, and it's denoted with the Greek letter alpha.

Choosing a big one allows you to reject the null hypothesis more often, though the problem is that sometimes you reject the null hypothesis in error: when a difference really doesn't exist, you claim that one does. If you choose a really small one, you reject the null hypothesis less often.

Sometimes you fail to reject the null hypothesis in error as well. There's no foolproof method here. Usually you want to keep your significance level low, something like 0.05 or 0.01; 0.05 is the default choice for most hypothesis tests.

Good luck.


Source: This work is adapted from Sophia author Jonathan Osters.

Terms to Know
Significance Level

The probability of making a Type I error. Abbreviated with the symbol alpha (α).