Statistical Significance

Hi. This tutorial covers statistical significance. Let's start by defining both statistical significance and practical significance. Statistical significance is a quantitative assessment of whether observations reflect a real pattern or are just due to chance.

Now, practical significance is a more subjective assessment of whether observations reflect a practical, real-world difference. So the big difference here: with statistical significance, we're doing a quantitative assessment, whereas practical significance is more of a judgment call. You're just asking whether the difference seems meaningful in practice, while with statistical significance, you're actually quantifying it: is there a significant difference or is there not?

All right, let's take a look at an example. Suppose you have reason to believe that a coin being flipped is not a fair coin. Instead of a 50/50 split of heads and tails, maybe you think heads comes up with some other probability. So you flipped this coin 50 times and calculated the proportion of heads. Since this comes from 50 flips, it's a sample proportion, and you found that p-hat equals 0.55, a little different from a 50/50 split. A friend flipped the same coin 50 times and found p-hat to equal 0.75, a much larger proportion of heads in that second sample.

So a hypothesis test could be performed for each sample proportion with the following hypotheses. The null hypothesis is that p equals 0.5, that is, the population proportion of heads is 50%, with an alternative hypothesis that the population proportion is greater than 0.5.

All right, now, what that hypothesis test will help you do is determine what level of difference from the assumed claim constitutes statistical significance. A result is considered statistically significant if it is unlikely to have occurred by chance alone.

So if we were to run a hypothesis test for both the 0.55 sample proportion and the 0.75 sample proportion, with p equal to 0.5 under the null hypothesis of a fair coin, your friend's sample proportion of 0.75 is much further from 0.5 than your 0.55. So the 0.75 result is much more likely to be statistically significant than the 0.55 result.

So we would likely be able to say, based on your friend's sample proportion, that this is probably not a fair coin. But we're not so sure about the 0.55; we don't know whether that difference is large enough to be statistically significant.
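One way to put numbers on this is a one-proportion z-test. The tutorial doesn't name a particular tool, so this is a minimal sketch in Python using only the standard library; the test statistic and one-sided p-value follow the usual normal approximation.

```python
from math import sqrt, erfc

def one_proportion_z_test(p_hat, p0, n):
    """One-sided z-test for a sample proportion.

    Tests H0: p = p0 against Ha: p > p0 using the normal
    approximation; returns the z statistic and the p-value.
    """
    se = sqrt(p0 * (1 - p0) / n)       # standard error under H0
    z = (p_hat - p0) / se
    p_value = 0.5 * erfc(z / sqrt(2))  # right-tail area, 1 - Phi(z)
    return z, p_value

# Your 50 flips: p-hat = 0.55
z_you, p_you = one_proportion_z_test(0.55, 0.5, 50)
# Your friend's 50 flips: p-hat = 0.75
z_friend, p_friend = one_proportion_z_test(0.75, 0.5, 50)

print(f"p-hat = 0.55: z = {z_you:.2f}, p = {p_you:.3f}")       # z = 0.71, p = 0.240
print(f"p-hat = 0.75: z = {z_friend:.2f}, p = {p_friend:.4f}") # z = 3.54, p = 0.0002
```

At the usual 0.05 significance level, the friend's 0.75 comfortably rejects the fair-coin hypothesis, while the 0.55 result, with a p-value around 0.24, does not.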

Now, statistical significance is not the same as practical significance, especially with a large sample size. With enough data, differences that are small in practical terms can still be found statistically significant. So let's consider the coin flip again, but this time suppose we flip the coin 10,000 times and still get that p-hat value of 0.55.
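We can check this scenario with a one-proportion z-test under the normal approximation, again as a standard-library sketch rather than any tool the tutorial prescribes:

```python
from math import sqrt, erfc

def one_proportion_z_test(p_hat, p0, n):
    """One-sided z statistic and p-value for H0: p = p0 vs Ha: p > p0."""
    se = sqrt(p0 * (1 - p0) / n)       # standard error under H0
    z = (p_hat - p0) / se
    return z, 0.5 * erfc(z / sqrt(2))  # right-tail p-value

# Same 0.55 sample proportion, but now from 10,000 flips
z, p_value = one_proportion_z_test(0.55, 0.5, 10_000)
print(f"z = {z:.1f}")   # z = 10.0
print(p_value < 0.05)   # True: far below any usual significance level
```

The standard error shrinks with the square root of the sample size, so the same 0.05 gap that gave z of about 0.71 at 50 flips gives z = 10 at 10,000 flips.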

Well, if we flipped a coin 10,000 times, chances are this p-hat value will be statistically significant, because with that many flips, being about 0.05 off of the null hypothesis value of 0.5 is very unlikely to be due to chance. So even though 0.05 isn't that far off in practical terms, with a large enough sample, this sample proportion will likely be statistically significant.

All right, this has been your tutorial on statistical significance. Thanks for watching.