When you use a hypothesis test as a decision-making tool, it's possible to make errors. Suppose, for example, you're running a clinical trial for a new drug. In reality, there are two possibilities: the drug is effective, or it is not.
The test likewise leaves you with two possible decisions: reject the null hypothesis and decide the drug is effective, or fail to reject the null hypothesis and decide the drug is not effective.
One of those two will be your conclusion. However, only one of the two realities is actually true. Crossing the two realities with the two decisions gives four possible outcomes, two of which are correct decisions.
| Decision                                          | Reality: drug is effective | Reality: drug is not effective |
|---------------------------------------------------|----------------------------|--------------------------------|
| Reject H_{0}; decide drug is effective            | Correct Decision           |                                |
| Fail to reject H_{0}; decide drug isn't effective |                            | Correct Decision               |
Consider the two correct decisions: if the drug is effective, you should reject the null hypothesis and decide that the drug is effective; if the drug is not effective, you should fail to reject the null hypothesis and decide that there isn't enough evidence to conclude the drug is effective.
The other two possibilities are incorrect decisions, called a Type I error and a Type II error.
| Decision                                          | Reality: drug is effective | Reality: drug is not effective |
|---------------------------------------------------|----------------------------|--------------------------------|
| Reject H_{0}; decide drug is effective            | Correct Decision           | Type I Error                   |
| Fail to reject H_{0}; decide drug isn't effective | Type II Error              | Correct Decision               |
A Type I error occurs when a true null hypothesis is rejected. In the example above, a Type I error happens when the drug is not effective, but you decide that it is: based on your data, you thought you had enough evidence to reject the null hypothesis, but in fact the drug is not effective.
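To make this concrete, here is a small simulation sketch (not part of the original tutorial; the z-test, sample size, and significance level are illustrative assumptions). When the null hypothesis is true, a test run at significance level α = 0.05 commits a Type I error in roughly 5% of repeated trials:

```python
import math
import random

random.seed(42)

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value for H0: population mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF computed from the error function (stdlib only)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 20_000
n = 30

# H0 is TRUE here: the "drug" produces no improvement (true mean = 0),
# so every rejection below is a Type I error.
rejections = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(n)], mu0=0, sigma=1) < alpha
    for _ in range(trials)
)
type_i_rate = rejections / trials
print(f"Observed Type I error rate: {type_i_rate:.3f}")
```

The observed rate lands near α, which is the point: the significance level you choose is, by construction, the probability of a Type I error when the null hypothesis is true.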
A Type II error occurs when a false null hypothesis is not rejected. As you can see in the chart above, this happens when the drug is effective, but the data didn't make that clear enough, so you failed to reject the null hypothesis. This incorrect decision is a Type II error.
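A Type II error can be sketched the same way (again an illustrative stdlib simulation, not part of the original tutorial): here the null hypothesis is false because the drug really does help, and every failure to reject counts as a Type II error.

```python
import math
import random

random.seed(7)

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value for H0: population mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 20_000
n = 30
true_effect = 0.5  # H0 is FALSE here: the drug shifts the mean by 0.5

# Every failure to reject below is a Type II error.
misses = sum(
    z_test_p_value([random.gauss(true_effect, 1) for _ in range(n)],
                   mu0=0, sigma=1) >= alpha
    for _ in range(trials)
)
type_ii_rate = misses / trials  # commonly written as beta
print(f"Observed Type II error rate (beta): {type_ii_rate:.3f}")
print(f"Power (1 - beta): {1 - type_ii_rate:.3f}")
```

Unlike the Type I rate, which is pinned to α by design, the Type II rate depends on the true effect size and the sample size: a larger effect or a bigger sample shrinks β.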
What are the consequences of each of these errors? Consider a Type I error versus a Type II error in the drug trial.
With a Type I error, you approve the drug and release it to the public even though it's not effective, along with any negative side effects it might have. There's no upside here, only negative consequences.
With a Type II error, you keep the drug off the market because you think it's not effective when, in fact, it is. You deny an effective drug to a public that might need it, because your data didn't show that it worked. This is another negative consequence. These errors always have negative consequences.
IN CONTEXT
In the criminal justice system, juries are told to presume that someone is innocent until proven guilty, meaning the null hypothesis is that the suspect is innocent, and the prosecution has to prove its case. What would a Type I and Type II error look like in this context?
A Type I error would be that the person is innocent, but they're convicted anyway.
A Type II error would be that the person is guilty, but the result of the trial is that they're acquitted.
Obviously, both of these are problematic, but the criminal justice system in America puts a lot of safeguards in place to make sure that a Type I error doesn't happen very often. In fact, the criminal justice system allows a Type II error to happen fairly frequently in order to reduce a Type I error.
You may think a Type I error is absolutely the worst thing you can do in this particular case, but it's not always this way. Sometimes a Type II error is worse. It depends on the situation, and so you have to analyze each situation to determine which one is a worse mistake to make.
Source: Adapted from Sophia tutorial by Jonathan Osters.