Source: Image of Socrates, Creative Commons, http://bit.ly/29ZntMM
Hello. I'm Glen. And this ethics tutorial is on problems with utilitarianism. As we go through the tutorial please keep in mind the definition for utilitarianism and its relationship to the utility principle.
In this tutorial, we're going to cover a few issues that arise from utilitarianism. One of them is that, when we focus only on consequences as utilitarianism asks us to do, we neglect other things that seem to be important, such as intent. Also, consequences don't always show us a clear choice in the action that we should take.
Further, not everyone's happiness is the same, and we sometimes want to consider that. And finally, it seems that when we consider only consequences, then we are therefore responsible not only for what we choose, but also for what we don't choose. Our choice is as much an affirmation as it is a negation of possibilities.
Utilitarianism tells us to focus on consequences, rather than other factors such as intent, when we consider the moral goodness of an action. This may initially seem a little strange because we might want to consider more than simply the consequences. But utilitarianism is simply not interested in the motivations for actions. In that, it's a lot like science. Science and utilitarianism are both interested in the results, in the ends: the result of an experiment and the result of an action.
Now this has a couple of interesting consequences. (Yes, when we focus on consequences, there are interesting consequences.) One of them is that intent sometimes does not match up well with the consequences. In fact, you can have someone with bad, evil, even sociopathic, malicious intent ending up doing something that leads to positive consequences, and vice versa.
Let's consider a couple of examples. First, let's say I tried to comfort someone by giving them a hug; I just went up and gave them a hug because they seemed distressed. But it turns out they don't like to be touched, so they ran away in horror and hid from me. My intent was good, but when utilitarianism looks at the consequences of the action, it says no, you acted morally wrong.
Another example. Let's say I ate my roommate's ice cream in order to make him angry. The intent was to cause anger because I'm a mean, nasty person. But it turns out he was grateful because he's trying to lose weight, and then he thanked me for removing the temptation of eating the ice cream. So even though I had a malicious intent, the result was good, and therefore the action was morally good. So here we see the discrepancy that can arise from utilitarianism, which focuses only on consequences in determining the moral goodness of an action.
Following this point a little further leads us to the conclusion that consequences are often very difficult to predict, and therefore weighing the possible consequences, the pros and cons of each, is also difficult. That means predicting the goodness of an action is difficult as well. We consider the possibilities of an action, we weigh what we understand to be the pros and cons of the situation, and it's still not clear what to do. Possible, foreseeable consequences might not lead us down a clear path.
Here's a couple of examples. Let's say I want to build an oil pipeline. The good consequences are that it will provide jobs, contribute to the economy, and obviously provide fuel in needed areas; a lot of good can come from this. There's also a lot of potential bad. There could be major spills that would cause environmental damage, there could be contamination of groundwater, and, let's face it, it's aesthetically ugly to have a pipeline going through your neighborhood.
So looking at the possible consequences, it's not clear what we should do about this one. And the pipeline example looks forward; let's also look backward, at something like the firebombing of Dresden in World War II. The purpose was to demoralize Germany, and it worked; it helped win the war. But the reason it demoralized Germany is that there was no strategic interest in Dresden. It wasn't a military center; it was the home of the art world of Germany, of opera, music, art, and literature.
It was all there, and it was all destroyed, and so were a whole lot of people and families. So there was an incredible loss of culture and of life, but it helped to win the war. And you weigh these things: was it a good thing to do, or was it a bad thing to do? You look at the consequences of what happened, and it's still unclear. So these are things to keep in mind when considering the weight that utilitarianism places upon our decision making.
Further, another issue is that utilitarianism tends to treat everyone's happiness as equal in value, primarily in terms of quantity, because that's easiest to assess. And so there is the assumption that my happiness is the same as your happiness, is the same as her happiness, and the same as his happiness, and everyone's is pretty much the same.
And so if we quantitatively treat everyone equally, or even identically, then we're doing good. But this is counterintuitive because, as we can all probably recognize, what makes me happy is a little bit different from what makes you happy. And so there's a difference in both quality and quantity.
For example, let's say I have a bunch of guests over to my house, guests of varying ages, sizes, and backgrounds. I'm serving them dinner, and utilitarianism would say that the morally good thing is to serve all of them an absolutely equal amount of food, because that treats everyone the same. But really, if they differ in age and size and dietary restrictions and so forth, then that's not the right thing to do.
Here's a bit of research for you. Look up the famous trolley problem. Quantitatively, if we go with strict utilitarianism based upon treating everyone's happiness equally, it's pretty clear what I should do in this problem: I should kill the one individual to save the five, however the problem is constructed or construed. But this, again, seems counterintuitive and problematic.
And the last issue we're going to consider is that the agent, according to utilitarianism, appears to be equally responsible both for what they choose and for what they don't choose. What they don't choose is called a negative choice: a negation of the possibilities I'm not pursuing. Someone is holding me accountable for what I don't do, and this can be problematic. If I turn right instead of left, I'm negating all of the possibilities that come from turning left, and am I responsible for that? I'm not really sure.
If I choose Taco Bell over Taco John's, am I responsible for all of the possibilities that will not happen because I don't go to Taco John's and provide Taco John's with my patronage? A couple of interesting things arise from this when we look at specific examples. These examples might seem a little extreme, but they illustrate the point. First of all, if I choose not to donate a kidney to my sister, who happens to need one, she will most likely die. So I'm negating that possibility.
Am I therefore guilty of murder, or at least some form of unintentional homicide, or even intentional homicide, because I don't give her a kidney? If I don't give a homeless person money, am I responsible for that person going hungry? Focusing on consequences seems to imply that I'm responsible not only for what I do, but also for what I don't do. So how you react to these examples will pretty much reveal how you react to utilitarianism overall.
In this tutorial, we've covered a few issues that arise from utilitarianism, including problems that arise from only focusing on consequences, how consequences don't always show a clear choice of action, how everyone's happiness, although assumed by utilitarianism to be the same, probably isn't, and how it seems to imply that I'm responsible, not only for the things that I choose to do, but what I choose not to do.