To begin with, recall that utilitarianism is the name given to any ethical theory that says something is good if, overall, it brings about utility (in other words, well-being or happiness). This is referred to as the utility principle.
Because of its focus on outcomes or consequences, utilitarianism doesn’t really care about people’s intentions. Here it seems to lose sight of a very important aspect of our ethical experience: we usually care a great deal about what people intend.
But as long as the consequences of a person’s actions bring about utility, the utilitarian is stuck with having to say that what they did was good, whatever their intentions.
It doesn’t make sense to approve of these actions. Of course, we can be grateful for the happy accident. But we still want to say that ethics should have a way of saying that these actions are bad, even if they accidentally brought about something good.
Imagine you and your colleague are pilots. You are a conscientious and responsible pilot; she is a reckless and homicidal pilot. Due to terrible weather conditions and mechanical faults, you crash and passengers die. Your colleague tries to crash the plane on purpose, but accidentally lands safely and actually saves the life of a passenger having a heart attack by getting them closer to the medical assistance they need.
Now, the utilitarian would have to say that your action was bad because it had bad consequences, and that your colleague’s action was good because it had good consequences.
Most of us would want to be able to say that there are ethical reasons to prefer your actions over your colleague’s. But the utilitarian can’t provide these reasons.
It isn’t just the difference between good and bad intentions that utilitarianism struggles with. It also seems to give us strange ethical evaluations when people aren’t intentionally doing anything in particular.
To be held responsible for the consequences of things you didn’t do is counterintuitive. For instance, if you didn’t lock your car and a thief who steals it crashes and dies, then the utilitarian would say your lack of action (failing to lock your car) brought about bad consequences. Therefore, they would hold you responsible. This clearly doesn't seem right to most of us.
A further problem utilitarianism faces is that it can be very difficult to calculate the consequences of actions. And without a definitive calculation of the goodness or badness of consequences, utilitarianism seems unable to make ethical judgments of good and bad.
Unless you’re psychic or clairvoyant, there’s no way you can predict all possible outcomes. And the more complicated the world gets, the more difficult it is to predict the effects of your actions. The food you buy could contribute to the exploitation of workers on the other side of the world without you knowing about it.
There are other problems with the focus on consequences as well. The utilitarian doesn’t care about whether the consequences are specifically good for you, or for someone else. All that matters for them is that the amount of utility overall is higher. This seems strange because it ignores our justifiable inclination to care more about ourselves or loved ones than strangers.
Imagine you’re on the Titanic just before it hits the iceberg. Your family are on the other side of the ship when you realize it’s sinking. The utilitarian would say that you should make sure as many people make it onto lifeboats as possible.
This would mean that you shouldn’t try to go and find your family, because doing so would waste time you could be using to save the people nearer to you.
Most of us wouldn’t blame someone for prioritizing their family in this situation. Something similar can be said about securing your own happiness.
Utilitarianism would seem to require you to sacrifice your own happiness whenever doing so would increase overall happiness. But we tend to think that only saints or martyrs need to go to such lengths. It seems unreasonable to expect that everyone must do this.