On to Consequentialism, introduced at Slide #17, the basic stance saying that the moral status of an action is entirely a matter of the value of its outcome for the world, i.e., we have only one duty–to act so as to bring about the best overall outcome for the world. But depending on what the theorist counts as better or worse outcomes for the world, you get different kinds of Consequentialism. In class, I mentioned three (Slide #18):
Utilitarianism = Consequentialism where the goodness of the outcome is measured in terms of overall happiness.
Hedonic Utilitarianism = Consequentialism where happiness is measured in terms of pleasure/pain.
Preference Utilitarianism = Consequentialism where happiness is measured in terms of preference satisfaction.
So, in answer to these:
So consequentialism is utilitarianism?
Is consequentialism similar to utilitarianism?
is it right to say that consequentialism is something like the utilitarian approach?
Watch the direction of your “is”–Utilitarianism is a type of Consequentialism but not all versions of Consequentialism need be Utilitarian. Ok, on to the questions. But don’t forget the other posts I’ve made that cover a lot of relevant ground as well (see this and this).
Isn’t the “happiness” in utilitarianism referring to the amount of happiness for self? And that I should do whatever brings me the most happiness as a utilitarian.
Is there such a thing as utilitarianism but only for personal happiness? Does that make me a pleasure-pain theorist? Or DST/PST?
Does a consequentialist weigh both scenarios by the amount of guilt they feel after a choice as well?
Nope–do watch the earlier definition. We are talking about the best overall outcome for the world. The greatest happiness for the greatest number, to use the old slogan. Your own happiness counts too–just add it to your calculation of the happiness of the world. Or, for that matter, your feeling of guilt too. But your duty isn’t to maximize your own happiness. Your Utilitarian duty is to maximize the world’s happiness.
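To make the bookkeeping concrete, here’s a toy calculation (the numbers are invented, purely for illustration). Suppose an action gives two strangers +5 and +2 units of happiness while costing you −4 in guilt. The Utilitarian sums over everyone affected:

$$\Delta W = (+5) + (+2) + (-4) = +3$$

The world’s happiness goes up overall, so the action is the thing to do, even though it is a net loss for you personally.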
For a hedonist utilitarian, if the action causes the individual immense pain, but brings about great pleasure for the rest of the world, is he expected to sacrifice himself, even though his individual pain is more than his pleasure?
Would a utilitarian argue that euthanasia for mentally ill people is morally wrong? Eg. if the mentally ill person suffers from severe depression and wants to end his/her life, but has almost discovered the cure for cancer
Yes. Given the basic definition introduced. Hence the “demandingness” objection introduced at Slide #29. Obviously, any sophisticated Utilitarian is going to find ways to respond to this. Watch out also for the Peter Singer reading for W05.
If euthanising the mentally ill person will bring about the greatest happiness of the greatest number–the happiness of the mentally ill person having been included in the calculation–yes. I told you this can be an unappealing doctrine when you think through what it is basically saying.
What if your version of “greater happiness of the world” involves killing many people (if you really believe this, like Thanos)? Based on consequentialism, is he morally right?
First of all, I’m pretty confident that–at least given the information I have–you can’t increase the world’s happiness by killing many people Thanos-style. But otherwise, if it really is the case that you can, and if Utilitarianism is true, that’s the right action.
what is the difference between happiness and pleasure in philosophy?
What’s the difference between overall happiness (utilitarianism) and overall net pleasure/pain (hedonistic utilitarianism)?
I’m using happiness here as the more general concept for well-being. For pleasure, see W03. You can think of Hedonic Utilitarians as Consequentialists who also adopt the Pleasure/Pain Theory of Well-being as their theory for determining the value of the overall outcome for the world.
Does Preference Utilitarianism mean you want as many preferences satisfied as possible?
Yes. At least that’s one way to do it. It gets complicated quickly, obviously, but you don’t need to worry about the complications. There are plenty of other things to worry about already. Sometimes, all I need you to do is to be able to grapple with the issues on a narrower front. If you are really interested, I will look for an optional reading.
Under utilitarianism, if a very well off person suffers the same decrease in utility as a person who is already suffering badly, is it considered the same?
Let’s say that you are faced with either bringing about (A) a world in which a very well off person suffers a decrease of X in utility vs. (B) a world in which a person who is already suffering badly suffers another decrease of X in utility, and everything else is the same. Then, under Utilitarianism, there’s no difference, morally speaking, between the course of action that brings about A rather than B. Do note, however, that under real-world conditions, it may be more costly to prevent the well off person’s suffering compared to the other person’s suffering, that is, things are not all equal.
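To see the “no difference” claim in toy numbers (again, invented purely for illustration): say the well off person sits at utility 90, the badly off person at 10, and X = 5. Then

$$W_A = (90 - 5) + 10 = 95, \qquad W_B = 90 + (10 - 5) = 95$$

The two worlds total exactly the same, so plain Utilitarianism ranks them as morally on a par.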
what does consequentialism say wrt people who have bad intentions but bring about a net happiness in the world
What if two actions with two different intentions (one is malicious the other is well-intended) end up with the same outcome?
For the Consequentialist, it’s not going to matter. I’m assuming that your “net happiness” calculation has already included whatever disvalue to the world’s happiness is caused by the existence of that bad intention.
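One more toy calculation (invented numbers): if the malicious act produces +10 for the world, while the bad intention itself spreads, say, −2 of fear and distrust, the outcome is worth 10 − 2 = 8. A well-intended act whose outcome also totals 8 is then, for the Consequentialist, morally on a par with it–the intention matters only through its effects on the outcome.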
In this week’s reading, a quote from Mill’s On Liberty is given claiming that the state is only allowed to exert force on an individual to prevent harm to others. Isn’t that more of a deontological stance rather than a utilitarian one? (Since it’s about rights.)
It can be taken that way in the abstract if you don’t worry about the larger context of Mill’s thinking. Also, keep in mind that he is talking specifically about what the state is allowed to do, rather than what it is morally ok for individuals to do. In any case, Mill can be complicated. Exactly how Utilitarian he is is a matter of discussion. At the very least, he is definitely not the sort of Utilitarian Jeremy Bentham is.
On to Warlord.
Isn’t that Che Guevara?
You must have confused Pedro with his long-lost twin brother doppelganger? (Just kidding; yes, that’s a photo of Che.)
Ok, more seriously, do watch the precise nature of the challenge presented–it’s not about the potential conflict between Deontological norms. Rather, the thought is this: conceivably, obeying a Deontological norm may lead to there being more violations of that very norm. But if we take the Deontology seriously, we are supposed to say that, nonetheless, our duty is to obey the norm. Therefore, Deontology includes within itself a permission to make the world a morally worse place, by the very measure of the norm we are stipulating for the sake of the discussion. And this seems irrational, and thus a problem for the Deontological way of thinking.
so then it becomes one’s duty to shoulder moral responsibility for the decision he makes, even if either decision he makes results in something awful? shouldn’t there be a different metric to gauge the moral ‘rightness’ of such a decision
Ok, but notice that this is just a very strident statement of the Deontological standpoint–my duty is to do what’s right, come what may! And the worry is exactly that this standpoint goes together with a permission to make the world a worse place, morally speaking, by the very measure of the duty at stake.
for the deontologist, can he view it not as him intentionally causing harm to that one person, but as him saving the others, since the whole group was going to be tortured anyway? so his norm becomes a duty to save others?
There are two ways to think of the above. If the proposal is that we not think in terms of a norm not to intentionally cause harm, but of “a duty to save others” instead, we haven’t actually confronted the challenge head on. We are just changing the terms of the scenario. And if we were to use this new proposal as the norm to think about–“a duty to save others”–are we confident that the world cannot be such that my very obedience to this norm leads to many more violations of that same norm? If not, the original problem remains. You need to appreciate the highly generalizable nature of the challenge.
A slightly more promising way to understand the above is to say that a workable Deontology will never consist of one categorical norm. There will always be a bunch that can ‘cover for each other’ so to speak. And meta-norms telling us how to resolve conflicts between norms, and so on. This is actually the best way to construe the following proposals:
what if a villager voluntarily offers himself to be tortured to let the others go free? If you accept his request, is that still morally permissible?
What happens if you rephrase it as, choose all but one to not be tortured?
In other words, introduce additional norms that cover for the action of torturing the one villager, so that the rest can escape. But here’s a thought–will all that really help defuse Warlord? Is it really inconceivable that my conforming to the whole package of Deontological norms can lead to many more violations of the same package?
Obviously, as with all other “worries” and “challenges” we have encountered recently, the above doesn’t have to be taken as presenting a fatal objection. Warlord points to gaps that serious Deontologists will need to fill if they want to keep their theory. (If you are keen, here’s a slightly older discussion by one of my profs in grad school that I think is still very helpful.)
Finally, the Trolley vs. Transplant thing. First, someone’s contribution:
Alright, on to actual questions…
Isn’t it morally impermissible to choose to kill anyone, no matter 1 or 5 lives?
There you have it–the Deontological stance!
i feel like if you are tied up on train tracks there’s no expectation for anyone to save you (you expect to die/get tortured), but when you go to the doctor, there is a duty of care between the doctor and you
For the doctor taking away your organs, is it not more of work ethics than whether or not it is morally right as a deontologist or consequentialist? Or is work ethic a subset of deontology?
Not sure if people noticed, but I did mention that you can build up a Deontological theory that includes duties relating to the relationships you have and the roles you occupy. So yeah, Transplant is covered. But note that Transplant is already nicely covered by the Deontological standpoint–the issue for the Deontologist is Trolley. Here, unless you want to say that just because the 6 on the track are not related to you, you have no obligation to do anything, you will still be faced with an issue–surely it’s at least permissible for the guy to pull the lever so that 1 rather than 5 die?
On the other hand, if you are coming from a Consequentialist point of view, Trolley is ok for you–it’s our intuitions in Transplant that are causing problems. Can your position be helped by including such considerations as relationships and roles? Not really. Let’s say that you are a Utilitarian. Then, the happiness engendered by people caring about those with whom they have a relationship, and by people doing their jobs, and conversely, any unhappiness engendered by the opposite, must be included in the calculation of the overall outcome’s goodness. But that’s all. To do more than that is to give up on your Utilitarianism.
is it possible to argue that when one makes the deliberate decision to kill the one healthy/safe person rather than letting ‘fate’ do its thing and take the lives of the five people, that that is what is wrong? the five people are already imperiled?
No matter which direction you come from (whether Deontological or Consequentialist), if you want to keep both intuitions (in Trolley and in Transplant), your job will be to find a way to make a moral distinction between the two cases. The tricky part, of course, is to make sure that any such distinction you draw is based upon a principle consistent with the theory you are trying to hold. A “killing vs. letting die” distinction, for instance, would be more friendly towards Deontology than Consequentialism.
Regarding the trolley problem, what would the outcome be if the person chose to kill all 6 people?
Do you think that’s morally permissible? (You are actually changing the terms of the scenario though.)
A question from a previous semester:
About the trolley case where we are asked whether we should save the five by switching the train, causing the death of one, or the other way round. In the first place, is this question even valid when you know this is a false dilemma? it’s like in reality, just a hypothetical question is not going to take place, and that the probability of a third possible way to stop the train is so high.
There is a short and a longer answer about these ‘trolley problems’ (or ‘lifeboat problems’). The short answer is this–of course real life is usually much more complicated and normally, more than two options exist. But this is to evade the force of the thought experiment–whose point, incidentally, is not to persuade you to go for one option rather than another. These thought experiments are just ways in which we force ourselves to make clear our own moral beliefs via a highly constrained scenario. If the choices were really down to just these two–essentially, don’t pull the lever (the trolley runs over the five), or pull the lever (the trolley runs over the one)–what do we think is the right thing to do? And pairs of such experiments that happen to elicit a differential result from us force us to think harder about what the moral difference is between them.
So “I” agree that it’s at least permissible to pull the lever, but it’s not permissible to harvest the organs from the one healthy person to save the five. But aren’t they all just “five lives vs. one life” cases? Shouldn’t I be consistent? If I don’t think they should result in the same answer, then maybe I need to find a way to explain the moral difference between them–maybe there is a difference between “letting” someone die, as opposed to “actively killing” him? And so on. Thank goodness these scenarios don’t normally happen in real life (though analogues do occur). Long and short–in my books, ‘trolley problems’ should not be taken as proofs of theories or guides to action in the real world; they are just philosophical heuristics for helping us focus our thinking.
The longer answer is, well, much longer and a bit too much for the purposes of this class. I’d rather that people don’t get bogged down. The gist is this: there are very smart philosophers out there who think that trolley problems are way overused, that they distort our thinking, and that they are entirely unfit for giving us any real insight into what really matters. Some versions of the complaint are not, strictly speaking, in conflict with my short answer, because they are targeting more extravagant uses of these thought experiments rather than the more modest view I put forward above. Others are more strident in their criticisms. So long and short–it’s a philosophical dispute in its own right. In 2002, Derek Parfit gave the Tanner Lectures on Human Values at UC Berkeley.
Being in attendance as a graduate student pursuing my PhD, I remember being both very impressed (and entertained) by Allen Wood’s response during the discussion, which was highly critical of the way trolley problems were used in Parfit’s talk, and in modern moral philosophy more generally; but also rather dissatisfied, as I thought he was being a bit unfair and didn’t really grapple with more charitable conceptions of what these thought experiments are meant to do. The website for the lecture is here. Wood’s response is from around 2:10 to 22:42 in the “Seminar and Discussion”. Later on during the general discussion, there was also an exchange between Allen Wood, Derek Parfit, Susan Wolf and Samuel Scheffler from 1:56:51 to 2:11:20 on the specific issue. (Ten years later, Francis Kamm, who uses trolley problems extensively in her own writings, gave her Tanner Lectures.)