The overall results improved from the last quiz, from 3.71 to 4.15–good job! There’s still a lot of headroom to grow, of course. But I think many of you are beginning to acclimatize. Click through to see…

 

  • Question 1

Option D (“Neither Gene, nor Lena, nor Tess are right”). 53% chose the right answer; most of the rest went for either Option A or Option B (with very few choosing Option C).

Given the terms of the question, this is what we know about dream-Dave, the president of the giant pharmaceutical company–he believes that by having his head scientists fabricate the results of the NORONA animal trials, he will do something that could potentially save the lives of millions. But this isn’t enough information for us to deduce that Dave believes doing so will bring about the best overall outcome for the world, or the best overall happiness of the world. The world is bigger than those ten million.

And even if–for the sake of the argument–he does believe those things, we still don’t know if he believes that acting so as to bring about the best overall outcome is our only duty. For all we know, maybe he acted as he did because he believes that he has a moral duty to be rambunctious and defy official regulations, which would make him a (rather weird) Deontologist, but still a Deontologist. Or maybe he acted as he did because he wants to act out of the virtue of courage–a quality he considers a defining trait of an ideal moral agent–which would make him a Virtue Ethicist.

Similar reasoning applies to the case of Deontology–just as there isn’t enough information to conclude that dream-Dave is either a Consequentialist or a Utilitarian, there isn’t enough information to conclude that dream-Dave is a Deontologist. Consequently, Gene, Lena, and Tess are all misplaced in their confidence.

By the way, this question is inspired by true events.

General coda: Someone who is a Consequentialist believes something–that the only moral duty we have is to act so as to bring about the best overall outcome for the world. Do we know how he would act? Not really. Given plausible assumptions, we know how he thinks he should act if he is to be consistent with the theory he believes. But unless no one ever goes against their own moral beliefs (you should not believe any such thing, by the way), we can’t deduce how he would behave. You can generalize this point to all moral theories that someone might believe in. The same problem arises with the converse: we can’t always securely deduce what moral theory a person believes in by looking at how he behaves, not even if we throw in some of the motives for his actions.

 

  • Question 2

Option D (“Neither Option A nor Option B is true”). 73% of you got this.

Neither Will nor Gene presented arguments against Consequentialism or Deontology. Will stated a position that a Deontologist might take. Even though Deontology is a rival moral theory to Consequentialism, a statement of Deontology isn’t an argument against Consequentialism. Similarly, Gene stated a position that a Consequentialist might take. Although he explicitly said that he disagreed with Will and that “there is no need to consider categorical duties”, what he followed up with was not an argument against Deontology so much as a statement of a Consequentialist position (specifically, one that morally appraises actions based on their expected outcomes/consequences). Good to see that most of you saw this.

Some wrote in to ask for more explanation, so here goes. Both Will and Gene made arguments, no doubt about it. In fact, we can even say that they made arguments against something the other said. But this doesn’t mean they made arguments against each other’s moral theories (hence the hint). Will’s argument is something like this:

Premise 1: We have a categorical duty never to intentionally cause harm to human beings.

Premise 2: Running human trials of NORONA before we are even sure that it is safe for living organisms is equivalent to intentionally harming the human beings whom the potential vaccine will be tested on.

Conclusion: We should neither fabricate the animal trials results nor carry on with human trials at this point!

In other words, Will ended his argument with a prescription, one generated from his Deontology. That prescription is contrary to the other party’s prescription. But it doesn’t constitute an attack on Gene’s Consequentialism, for two reasons. First, Will already assumed an anti-Consequentialist position in Premise 1, the claim that there is a categorical duty in play. Second, the conclusion, even if true, doesn’t actually imply that Consequentialism is false.

Gene’s argument is messier–the ordering of the sentences can be misleading:

Premise 1: Even though fabricating the animal trial results and proceeding to human trials risks adverse side-effects on living organisms, nonetheless, the expected value of doing so is positive for the world.

Premise 2: There is no need to consider categorical duties / the only thing to consider is the expected value of our actions for the world.

Conclusion: We should fabricate the animal trials results and carry on with human trials now!

The tricky one is “There is no need to consider categorical duties”–you might be tempted to see this as the conclusion. But think about how Gene manages to get from Premise 1 to the Conclusion. Actually, that line is another premise–Gene needs it to take us from Premise 1 to the Conclusion. In any case, what we have is parallel to the previous case–Gene ended his argument with a prescription, one generated from his Consequentialism. That prescription is contrary to the other party’s prescription. But as in the other case, it doesn’t constitute an attack on Will’s Deontology, for two reasons. First, Gene already assumed an anti-Deontological position in Premise 2. Second, the conclusion, even if true, doesn’t actually imply that Deontology is false.

A prescription that follows from a particular moral theory will say, e.g., do this particular thing, or don’t do this particular thing. What’s interesting here is that both parties’ specific prescriptions–taken by themselves–could just as well have been generated by each other’s theory. After all, the world could work out such that, given the balance of outcomes, the right thing to do is X, which is also what the Deontological theory says we should do. And likewise, when we act according to a categorical duty, it might still be the case that we are doing that which has the most expected value for the world–even though that’s not what motivated us. Once you see this, you can see why the two of them arguing against each other’s specific prescriptions (fabricate the results/proceed to human trials, or don’t) doesn’t get at each other’s moral theories.

 

  • Question 3

Options B, C, and D. Let’s go through each option in turn.

Option A says: “A norm saying that the pros and cons to the world’s welfare determine whether following one of the existing categorical norms is the morally right thing to do”. Imagine someone who is otherwise a Deontologist. She subscribes to a list of categorical norms. Now add what’s in the option statement to her list of moral beliefs–basically, her original norms are no longer categorical… I’m happy to see that very, very few of you picked this option.

Option B says: “A norm laying out the logical priority between the other norms such that, if there should be a conflict, one’s duty is to act in accordance with the more important norm.” Unlike the above, this addition won’t convert our Deontologist’s categorical norms into something else. I’m happy to see that almost all of you (94%) saw this–good!

Option C says: “A norm saying that when faced with a conflict between norms, one has a permission to act according to either.” This is similar to the previous option in that it doesn’t introduce anything that makes the existing norms non-categorical. Most of you (80%) saw this too–good!

The harder one is Option D. It says, “A descriptive theory, saying that as a matter of fact, acting in accordance with one’s moral duty as they are laid out in the categorical norms will never make the world a worse off place.” Only 29% picked this. But adding this is like adding a theory about whether animals can feel pain to a Hedonic Utilitarian position–a descriptive add-on that leaves the normative theory untouched. Likewise, adding what the option says to the Deontologist’s beliefs doesn’t change the nature of the norms she already subscribes to, nor does it add a new, non-categorical norm. So this too counts.

 

  • Question 4

Options B and D. Almost all of you (>92%) were able to see that both are correct–but some of you incorrectly picked Option C as well. For this question, you only need to focus on the final bit–“it would be morally wrong for you to go to work with these symptoms”. How Lena arrived at this conclusion, whether she was right to conclude this way, and what moral theory she holds aren’t important. The question is only about whether you understand the “terms of evaluation” taught in W03. Since that action is wrong, it follows that it’s impermissible and that Dave would be morally blameworthy for doing it. Option A is incorrect because not being blameworthy for an action is not the same as being praiseworthy for it (good to see that almost everyone got this). Option C is not correct because Lena did not say anything about anything being supererogatory–something morally good and so praiseworthy to do, but not blameworthy to leave undone. In fact, what she said–“Given that one ought to always act to maximize the overall benefit to the world, it would be morally wrong for you to go to work with these symptoms”–implies that Option D is correct, from which it also follows that Option C is wrong.

 

  • Question 5

Option D (“Neither Dave, nor Will, nor Lena is definitely right”). Around 40% got this, with 52% distracted by Option C (“Will and Lena”).

Whether Bern did the right thing or not does not depend on what she believes–it depends on what the correct moral theory is. This means that neither Dave nor Will is right. Whether Bern is a Deontologist or a Consequentialist isn’t the issue, since that only tells us which theory she believes is true.

Lena isn’t definitely right because we don’t have enough information. Sure, Josh went on to live a happy and fulfilling life. But we don’t know whether the overall happiness of the world was improved or made worse by Bern’s action. Maybe Josh led a happy life, or maybe he pursued a PhD in philosophy and was constantly gripped by existential crises–we just don’t know. In fact, even if Josh’s happiness improved, perhaps because being constantly gripped by philosophy-induced existential crises brought him happiness, the scenario is still compatible with a net loss for the world’s overall happiness–because, say, Josh’s family members are very unhappy with his choices. And so on. So all in all, there isn’t enough information for us to conclude that Lena is “definitely right”.

Do see this, if you haven’t.

 

  • Question 6

Option A (“Implementing the prototype cure was the morally right thing to do”). Most of you (91%) got this–good job!

Remember that the question is asking which claim Josh would disagree with. Option A conflicts with Josh’s Deontology: the preamble already states that Josh believes it is wrong to intentionally do things that kill people no matter the circumstances, and 10% of those who received the cure died. Conversely, Option B is the wrong answer for parallel reasons. Options C and D are both wrong because they follow from the definitions of Utilitarianism and Deontology respectively, so Josh will have to agree with them, given that he is operating with the concepts introduced in our class. Note that Josh does not need to be a Utilitarian to agree with C.

 

  • Question 7

Option C (“I and II only”). Around 27% of you got it. Most of the rest were distracted by Options A (“I, II and III only”) and B (“II, III and IV only”). The four statements are:

I. Abe did not correctly represent Prof. Lloyd’s concern regarding Deontology.

II. Abe has reasoned incorrectly to the conclusion that consequences determine the moral evaluation of actions even for the Deontologist.

III. Given what he said in the Webinar, Prof Lloyd has to agree with Dave’s defence of Deontology in response to Abe’s objection.

IV. If the only true moral rules are those which, if complied with, will generally lead to the best outcomes for the world, then Dave is wrong to say that consequences of actions don’t matter to the moral evaluation of actions for the Deontologist.

I. is true because the concern in the Warlord discussion in W03 is not about the immorality of killing many more innocent people. After all, a Deontologist isn’t supposed to say that outcomes in the world matter to moral evaluation–the action (with intention) and its relation to categorical norms is the only thing that is supposed to matter. The concern, rather, is that philosophically, if Deontology is the correct theory of right and wrong, it means that we have the moral permission to bring it about that a norm is violated many more times by refusing to violate that same norm ourselves–and this seems counterintuitive, irrational, or paradoxical…

II. is true because the Deontologist could make the same response without assuming that consequences matter for moral evaluation. For example, there could be a moral duty for us to break as few moral rules as possible. Then, the same Deontologist response could be made under the assumption that the moral status of an action is all about how it relates to the categorical norms–in this case, specifically, the norm to break as few rules as possible. At the end of the day, it’s only the relation of the action to the norm that matters to the Deontologist. Not sure if you noticed, but II. is redundant since it appears in all the options 😀 In any case…

III. could be false because even if Prof. Lloyd were a Deontologist himself, he might think that having the moral permission to let many more villagers be killed is too damning a bullet for any Deontologist to bite. He might think that we should give up Deontology for this reason, or respond to the objection differently. The “has to” is too strong. From the information provided, we simply don’t know…

IV. is not true because even if the world is as described, it doesn’t mean that the Deontologist has stopped being a Deontologist. It just happens that when we comply with categorical norms, happily enough, the world cooperates and the best outcomes ensue. See also the explanation for Question 3 Option D above.

Yes, this one is more complicated.

 

  • Question 8

Option D (“It is possible for a Hedonic Utilitarian to end up (sincerely) evaluating Barry Bodoh as having done something morally wrong”). Around 31% picked this. Most were distracted by Options A and B.

Option A (“The BoBari method is actually Utilitarianism in disguise”) and Option B (“Since application of the BoBari method has led to more charitable donations, practitioners of the BoBari method are doing what is morally right if some form of Actual Outcome Consequentialism is true”) are both false since the BoBari method does not actually tell us how to evaluate moral action. And even if you think it does, we don’t know whether having people keep only those items which “spark joy”, or for that matter, there being more charitable donations, means the best overall outcome for the world. For all we know, more charitable donations could mean more environmental degradation, and more people keeping only things which “spark joy” could lead to an overall decrease of happiness… Almost no one picked Option C–but it’s also not viable given that, as stated earlier, we don’t know how the BoBari method evaluates moral action, if it even has an opinion about that at all. Given the above, you can deduce that Option D is correct, not just in the sense that it’s the only option left, but also because there are conceivably circumstances in the world such that the overall balance of pleasure/pain is negatively affected by people practicing the method.