Problems with Utilitarianism

Author: John Lumsden

Identify common problems surrounding utilitarianism



In this tutorial we will look at some of the potential drawbacks of utilitarianism, focusing on its inability to accommodate some of our basic ethical ideas. Our discussion will break down like this:
  1. Intentions and Actions
  2. Calculating Consequences

1. Intentions and Actions

To begin with, recall that utilitarianism is the name given to any ethical theory that says something is good if, overall, it brings about utility (in other words, well-being or happiness). This is referred to as the utility principle.

Because of its focus on outcomes or consequences, utilitarianism doesn’t take people’s intentions into account. In doing so, it seems to lose sight of a very important aspect of our ethical experience: we usually care a great deal about people’s intentions.

Wouldn’t you want to be able to say that someone’s cruel actions are bad, even if the outcome was good?

As long as the consequences of someone's actions bring about utility, the utilitarian is stuck having to say that what they did was good.

If someone set out to exterminate a whole people with a biological weapon, the only reasonable ethical evaluation is to say this action is bad. But if this weapon accidentally cured a widespread disease (instead of killing, as was intended), then the utilitarian would have to say it was good.

It doesn’t make sense to approve of these actions. Of course, we can be grateful for the happy accident. But we still want to say that ethics should have a way of saying that these actions are bad, even if they accidentally brought about something good.


Imagine you and your colleague are pilots. You are a conscientious and responsible pilot; she is a reckless and homicidal pilot. Due to terrible weather conditions and mechanical faults, you crash and passengers die. Your colleague tries to crash the plane on purpose, but accidentally lands safely and actually saves the life of a passenger having a heart attack by getting them closer to the medical assistance they need.

Now, the utilitarian would have to say that your action was bad because it had bad consequences; and say that your colleague's action was good because it had good consequences.

Most of us would want to be able to say that there are ethical reasons to prefer your actions over your colleague’s. But the utilitarian can’t provide these reasons.

It isn’t just the difference between good and bad intentions that utilitarianism struggles with. It also seems to give us strange ethical evaluations when people aren’t intentionally doing anything in particular.

You’re reading or listening to this tutorial right now. If you were out volunteering for a charity instead, you might save lives. You don’t intend to let anyone die while you study, but if you could have saved lives, the utilitarian would have to hold you responsible anyway.

Being held responsible for the consequences of things you didn’t do is counterintuitive. For instance, suppose you didn’t lock your car, and a thief steals it, crashes, and dies. The utilitarian would say your lack of action (failing to lock your car) brought about bad consequences, and would therefore hold you responsible. This clearly doesn’t seem right to most of us.

2. Calculating Consequences

Another problem utilitarianism faces is that it can be very difficult to calculate the consequences of actions. And without a definitive calculation of the goodness or badness of consequences, utilitarianism seems unable to make ethical judgments of good and bad.

When you’re trying to decide what career path to take, how can you be sure the one you choose will provide the most utility? You might have a good idea about which one you will enjoy more. But perhaps you’re fired because your industry moves to another country. Or maybe your industry actually contributes to the exploitation and misery of other people.

Unless you’re psychic or clairvoyant, there’s no way you can predict all possible outcomes. And the more complicated the world gets, the more difficult it is to predict the effects of your actions. The food you buy could contribute to the exploitation of workers on the other side of the world without you knowing about it.

There are other problems with the focus on consequences as well. The utilitarian doesn’t care about whether the consequences are specifically good for you, or for someone else. All that matters for them is that the amount of utility overall is higher. This seems strange because it ignores our justifiable inclination to care more about ourselves or loved ones than strangers.


Imagine you’re on the Titanic just before it hits the iceberg. Your family is on the other side of the ship when you realize it’s sinking. The utilitarian would say that you should make sure as many people as possible make it onto lifeboats.

This would mean that you shouldn’t go to find your family, because doing so would waste time you could be using to save the people nearer to you.

Most of us wouldn’t blame someone for prioritizing their family in this situation. Something similar can be said about securing your own happiness.

If you can increase happiness more by volunteering every spare moment you have, then the utilitarian will say you should do it.

We tend to think that only saints or martyrs go to such lengths to secure overall happiness. It seems unreasonable to expect this of everyone.

We started this tutorial by looking at the way utilitarianism deals with intentions and actions in its ethical judgments. Since the utility of consequences can be judged without reference to intentions, utilitarianism can ignore them. But this leads to various counterintuitive results.

We also saw some of the problems involved in calculating consequences. In particular, the difficulty of predicting consequences was seen to undermine the goal of increasing utility. Finally, we saw that utilitarianism forces us to make strangers’ happiness as important as our own.