This tutorial talks about significance level. Significance level is also called alpha, and it's written with the Greek letter α.
Now, the significance level tells us the probability of making a type I error. A type I error is when you reject the null hypothesis even though it's true. So whatever significance level we pick tells us the likelihood of that happening.
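To make "probability of a type I error" concrete, here's a small simulation sketch, not part of the original tutorial. It assumes a one-sided z-test with a known sigma of 1, a true null of mu = 0, n = 30, and alpha = 0.05; all of those numbers are made up for illustration. Because the null really is true here, every rejection is a type I error, and the rejection rate should land close to alpha.

```python
# Sketch: estimate the type I error rate by testing a TRUE null many times.
# Assumed setup (made-up for illustration): one-sided z-test, sigma = 1,
# true mean mu = 0, n = 30, alpha = 0.05.
import random
from statistics import NormalDist, mean

random.seed(1)
alpha = 0.05
n = 30
z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value

trials = 20_000
rejections = 0
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]  # the null is true: mu = 0
    z = mean(sample) * n ** 0.5                      # z-statistic with sigma = 1
    if z > z_crit:                                   # reject H0: a type I error
        rejections += 1

print(round(rejections / trials, 3))  # should be close to alpha = 0.05
```

The point of the sketch is just that alpha isn't an abstract label: it's the long-run rate at which a true null gets rejected.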
Now, you pick your significance level before you do your hypothesis test. If you wait until afterwards, once you've already seen the results, then you're not being completely honest. What level of alpha you pick depends on your problem and the impact of having a type I error.
We can't just pick a super, super low alpha and say we're going to make certain there's no chance of a type I error, because there are trade-offs. One, it's going to increase the probability of a type II error. And two, it's going to increase our costs, because we'd typically need more data to compensate.
Typically, significance levels are 1%, 5%, or 10%, and more often than not the standard significance level is 5%. So when a problem says the alpha is 5%, or the alpha is 0.05, that's telling you the significance level. It's telling you your probability of making a type I error, and it's going to drive decisions about critical values. This has been your tutorial on significance level.
This tutorial talks about the power of a hypothesis test. The power of a hypothesis test tells us the probability of not making a type II error. As a reminder, a type II error is when the null hypothesis isn't rejected even though it isn't true. That's something we don't want to happen, but it's going to happen on occasion.
Typically it's calculated with technology. So I'm not going to show you how to calculate it here. I'm just going to show you a little bit about how it can be changed and what effects that has.
If you want to increase your power, that is, the probability of not making a type II error, of not failing to reject the null hypothesis when it's false, there are two main options. We can increase the power by increasing the significance level, increasing your alpha, but when that happens, we're going to increase the risk of a type I error as well. Or we can use a larger sample size, but that would mean more time and more money.
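The tutorial says power is usually calculated with technology, so here's a rough sketch of what that calculation looks like for one simple case: a one-sided z-test, where power has the closed form Φ(δ√n/σ − z_crit). This isn't from the tutorial; the effect size δ = 0.5 and σ = 1 are made-up numbers, chosen just to show both levers, alpha and sample size, moving power in the direction described above.

```python
# Sketch: power of a one-sided z-test, power = Phi(delta*sqrt(n)/sigma - z_crit).
# Assumed effect size delta = 0.5 and sigma = 1 are made-up for illustration.
from statistics import NormalDist

def power(alpha, n, delta=0.5, sigma=1.0):
    z_crit = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    return NormalDist().cdf(delta * n ** 0.5 / sigma - z_crit)

print(round(power(alpha=0.05, n=20), 2))  # baseline: about 0.72
print(round(power(alpha=0.10, n=20), 2))  # bigger alpha  -> more power (about 0.83)
print(round(power(alpha=0.05, n=40), 2))  # bigger sample -> more power (about 0.94)
```

Either lever raises power, but only the sample-size lever does it without also raising the type I error rate, which is exactly the trade-off the tutorial is pointing at.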
Here's an example looking at the criminal justice system. With that, our null hypothesis is that the defendant is innocent, and the alternative hypothesis is that he's guilty, that he's not innocent. A type I error is convicting an innocent man, and a type II error is freeing a guilty man.
So the first plan is to free everybody. If we free everybody, the power of our hypothesis test is zero: there's no chance of correctly convicting a guilty person. However, if we swung in the opposite direction and convicted everybody, our power would be 1, because every guilty person gets convicted, so there's no chance of a type II error. But now the probability of a type I error is 100%, because we're convicting every innocent person as well.
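The free-everybody and convict-everybody extremes can be put on a dial. Here's a sketch, not from the tutorial, using the same assumed one-sided z-test setup (made-up δ = 0.5, n = 20, σ = 1): lowering the critical value is like convicting more people, and alpha (convicting the innocent) rises together with power (convicting the guilty).

```python
# Sketch: sweep the critical value of a one-sided z-test to show the
# type I / power trade-off. Assumed delta = 0.5, n = 20, sigma = 1
# (made-up numbers for illustration).
from statistics import NormalDist

Z = NormalDist()
delta, n = 0.5, 20

results = []
for z_crit in (3.0, 1.64, 0.0, -3.0):      # from "free almost everybody" to "convict everybody"
    alpha = 1 - Z.cdf(z_crit)              # chance of convicting the innocent (type I)
    pwr = Z.cdf(delta * n ** 0.5 - z_crit) # chance of convicting the guilty (power)
    results.append((alpha, pwr))
    print(f"z_crit={z_crit:5.2f}  alpha={alpha:.3f}  power={pwr:.3f}")
```

As the critical value drops, both columns climb toward 1 together; there's no setting that pushes power up while holding the type I error rate down, short of gathering more evidence (a bigger sample).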
So this is showing us the trade-off between type I and type II errors as we increase the power of a hypothesis test. Even though these are very extreme examples, the same trade-off happens, in smaller amounts, at every level in between. So this has been your tutorial about the power of hypothesis tests.