I will be brief for most of the questions (additions in blue). And, as announced, I won’t be posting the questions since you were allowed to take them out of the exam hall (so please don’t email me to ask for them). The 75th, 50th, and 25th percentile scores are 17, 15, and 12, respectively. Yes, this one turned out to be harder. But please don’t fret–this is where the NUS curve is in your favor. A few forms had incorrect matriculation numbers, but I was able to figure out all of them and account for all students who took the exam. I expect to finalize all marks by Wednesday or Thursday at the latest. Click through to see…
- Question 1
Option C (31%; many distracted by B and D). There isn’t enough information to tell whether either Will or Lena subscribes to Consequentialism or Deontology (so not Options A or B). Nonetheless, what they say is consistent with Utilitarianism—so they might subscribe to that theory, even though they might also not. On the other hand, if, given what they say, they (definitely) subscribe to a form of Utilitarianism, or they (definitely) reject Utilitarianism, then you shouldn’t pick Option C–in fact, this is what rejecting Option C to go for Option D means.
- Question 2
Option D (16%; majority distracted by Option C). Not Option A since he said: “…we have a moral duty to do that which maximizes the world’s happiness.” Not Option B or C because the information provided doesn’t entail that Utilitarianism is the true moral theory (cf. Question 8 below), and so doesn’t entail that Dave has the stated moral duty.
- Question 3
Option A (56%; many distracted by Option B). Basic terms of moral appraisal stuff. Gene’s argument is valid (it’s just a modus tollens) but unsound (“if we aren’t morally wrong to hog this space, we would be praiseworthy” is the false premise).
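For reference, here is the bare form of Gene’s argument (a minimal sketch in LaTeX; the letters are placeholders I’ve introduced, not notation from the exam):

```latex
% P: we aren't morally wrong to hog this space
% Q: we are praiseworthy
% Modus tollens is a valid form, but validity isn't soundness --
% soundness also requires that both premises be true.
\begin{align*}
  &\text{(1) } P \rightarrow Q && \text{(the false premise)}\\
  &\text{(2) } \neg Q\\
  &\therefore\ \neg P && \text{(valid, but unsound, since (1) is false)}
\end{align*}
```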
- Question 4
Option B (55%; rest spread out between the remaining options). Not Option A since the Puppy Argument can still be sound even if cocomone is an impossible substance, as long as it remains the case that it would be morally wrong for us to eat at Fred’s had cocomone existed and he really did what he did. In any case, even if both the Puppy Argument and the Utilitarian Argument are unsound, it doesn’t follow that it is morally OK for Will to drink his LICHs. For a similar reason, also not Option C: just because the conclusion of an argument is a true statement, it doesn’t follow that a particular argument that terminates in that conclusion is sound.
- Question 5
Option C (50%; rest spread out mainly between Options A and D). Recall that as long as non-rational animals remain moral patients even if they aren’t moral agents, the rationality objection won’t work (this was both in the reading and in the lecture).
- Question 6
Option D (49%; most of the others chose Option A). Basically, given his stance, Will should hold that both the Puppy Argument and the Utilitarian Argument are unsound, and since the Utilitarian Argument is valid, he should believe that one of the premises is false.
- Question 7
Option A (19%; many distracted by Option D). The information provided implies that Mr Fred Chang subscribes to the Drowning Child Argument. However, we don’t know if, like Peter Singer, some form of Utilitarianism, or either the Strong or Moderate Principle, is behind his agreement with the premise that we have a moral obligation to save a drowning child in a muddy pond even if it means getting our clothes muddy. Note that Option A (“Mr Fred Chang may subscribe to Utilitarianism, but he might not”) isn’t a tautology (i.e., true no matter what)—it is false if it turns out that, given just the information provided, Mr Fred Chang does (definitely) subscribe to Utilitarianism, or that he (definitely) doesn’t.
- Question 8
Option D (65%). If the (Utilitarian) value of the drowning child’s life is less than what it takes to save her, then Mr Chang’s argument has a false premise and so is unsound. If the (Utilitarian) value of the starving child’s life is less than what it takes to save him, then the conclusion of Mr Chang’s argument is false and so his argument has to be unsound. In context, “donating much more of one’s wealth to famine relief” = “do something to save the starving child in the poor country”. Otherwise, the rest of his argument would be disconnected (and invalid, so unsound…). (Note the addendum–“Mr Chan” in Options A and B should be “Mr Chang”.)
- Question 9
Option C (6%; most chose either Option A or Option B). Basically, most of you were hoodwinked by Will. What he said was–“Our moral duty is to do that which maximizes the world’s happiness; and a person’s expected lifetime income is an indicator of the net happiness that he or she is expected to contribute to the world.” In other words, Will puts forward an actual-outcome Utilitarianism while saying that expected lifetime income tells us something about expected outcomes–he failed to provide a set of conditions under which they have a moral duty to save A rather than B. Lena, on the other hand, puts forward a view that references expected outcomes, and the information provided is all about expected outcomes. The distinction came up in the lecture on Singer.
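To make the actual/expected distinction concrete, here is a minimal sketch (the notation is mine, not from the lecture):

```latex
% Actual-outcome Utilitarianism: act a is right iff it maximizes U(o_a),
% the net happiness of the outcome o_a that a actually produces.
% Expected-outcome Utilitarianism: act a is right iff it maximizes the
% probability-weighted average over possible outcomes:
\[
  \mathbb{E}[U \mid a] \;=\; \sum_{o} P(o \mid a)\, U(o)
\]
% Will's stated view is of the first kind, but his evidence (expected
% lifetime income) bears only on quantities of the second kind.
```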
- Question 10
Option B (76%). Tess gave explicit consent, though conditional upon Lena’s action.
- Question 11
Option A (49%; most of the rest chose either Options B or D). The parties are trying to get into power, which should already make you suspicious. In any case, even if all their campaign promises are enacted, there will still be a sovereign wealth fund and a national budget—hardly an anarcho-capitalist society, so not Option B. Given Huemer’s Philosophical Anarchism, he wouldn’t say that the party which comes to power has political authority, even if it did so in an open and fair election, so not Option C.
- Question 12
Option B (92%). Statism (and Anarchism) are prescriptive theories about whether Political Authority exists, rather than descriptive theories about whether people will feel a sense of moral obligation to obey the government, or will in fact obey it.
- Question 13
Option A (61%; most of the rest are between Options C and D). See the lecture’s definition of moral responsibility. Options B and C are wrong as Will is morally responsible only if he deserves that censure; it’s not just a matter of whether people do censure him, or whether they have good reason to do so.
- Question 14
Option C (35%; the rest spread between the remaining options). If Will is morally responsible for his belch, then both the Standard (Dilemma) Argument and the Basic Argument are unsound. But recall that the three positions in Option A are basically defined in terms of the rejection of some aspect of the Standard (Dilemma) Argument, so one of them is true.
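As a reminder, one common rendition of the dilemma runs as follows (a sketch; not necessarily the exact wording used in the lecture):

```latex
\begin{enumerate}
  \item Either determinism is true, or it is not.
  \item If determinism is true, then we are not morally responsible
        for our actions.
  \item If determinism is not true, then our actions are (partly) a
        matter of chance, and we are not morally responsible for them either.
  \item Therefore, we are not morally responsible for our actions.
\end{enumerate}
% If Will is morally responsible for his belch, then at least one of
% (2) or (3) must be false -- which is why one of the positions defined
% by rejecting some aspect of the argument has to be true.
```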
- Question 15
Option D (67%). If both arguments are sound, the Basic Argument is sound; hence the statement in Option B is correct. If we are morally responsible for at least some of our actions and decisions, then both arguments are unsound, which means that at least one of them is unsound; hence the statement in Option C is also correct.
- Question 16
Option D (30%; many chose Option C). Basically, neither is a successful example of an inconsistent set of statements. The easiest way to figure out Gene’s statements is to try diagramming them–two rows (first year, second-year-and-above), three columns (NUS-FASS, NUS-elsewhere, exchange-elsewhere), so six boxes. But one of them–first year, exchange-elsewhere–is empty. Dave’s statements don’t imply “every student in GET1029 taking the exam this semester selects Option D for Question 16 in the exam”. (And as a matter of fact, not every student selected Option D for Question 16–70% didn’t.)
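If it helps, here is one way the diagram might be set out (the labels are reconstructed from the description above; “empty” marks the one box the statements rule out):

|                       | NUS-FASS | NUS-elsewhere | exchange-elsewhere |
|-----------------------|----------|---------------|--------------------|
| first year            | possible | possible      | empty              |
| second-year-and-above | possible | possible      | possible           |

An empty box on its own isn’t a contradiction: the statements can all be true together so long as the remaining boxes can be occupied.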
- Question 17
Option A (39%; many chose Option B, but also the others). The three propositions—God is omnipotent; God is wholly good; and yet evil exists—plus Mackie’s two so-called “quasi-logical rules” together imply an inconsistency. Tess is right to point out that if the two rules aren’t logically necessary, then, contrary to what Mackie says, there isn’t an inconsistency between the initial three propositions. But since the complete set of five propositions does imply an inconsistency, the Classical Theist who believes that evil exists (i.e., someone who believes all three initial propositions) will need to reject at least one of the two rules to avoid any inconsistency. So if the two rules are true, Dave is right to say that this Theist holds beliefs that imply a falsehood.
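Schematically, the full set of five propositions looks like this (the rules are given in the rough form Mackie states them in “Evil and Omnipotence”):

```latex
\begin{enumerate}
  \item God is omnipotent.
  \item God is wholly good.
  \item Evil exists.
  \item A good thing always eliminates evil as far as it can.
        % quasi-logical rule
  \item There are no limits to what an omnipotent thing can do.
        % quasi-logical rule
\end{enumerate}
% (1), (2), (4), and (5) together entail that evil does not exist,
% contradicting (3). Drop either rule and no contradiction follows
% from (1)-(3) alone.
```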
- Question 18
Option B (27%; the rest spread between the other options). The difference between Options B and C is that for the first, “having false beliefs is not itself an evil” is a necessary condition for Guru Robertino to have an adequate solution to the LPOE; while in the second, it is a sufficient condition. The latter is wrong unless you can guarantee that there aren’t other things mentioned in Guru Robertino’s teachings that are, in fact, evils (e.g., the “many problems in the world”).
- Question 19
Option D (53%; many went for Option B). Truth is neither a necessary nor sufficient condition for justification.
- Question 20
Option B (26%). Abe is wrong since nothing in the definition of a disputation says that the subject matter must be of a certain kind so long as the logical conditions are satisfied. Jerry is wrong since the Argument from Disputation, if sound, shows that neither party is justified, not that there isn’t a right answer (truth). Bern isn’t saying that Phosphorus will be justified in his belief. She’s saying that if Zhuangzi’s argument is sound, then Phosphorus will be justified in his belief only if his reasons don’t beg the question against Hesperus. But given the Tetralemma, those reasons are going to beg the question against Hesperus… so he won’t be justified in his belief.
- Question 21
Option D (32%; many distracted by Option B, and also Option A). Abe is wrong because Epistemic Permissivism is about whether epistemic peers can have opposing but equally justified beliefs in the domain, not about whether they “could have” come to opposing conclusions (“could have come to a certain conclusion” is not the same as “could have rationally come to a certain conclusion”). Besides, we don’t even know if Phosphorus and Hesperus are epistemic peers. Bern is wrong because Epistemic Uniqueness is not about whether peers and non-peers “would have” come to the same or different conclusions, but whether there is a uniquely justified belief for the domain.
- Question 22
Option D (39%; most of the rest between Options A and C). Both Macheim and Flanck espouse Antirealism, but only Flanck espouses Scientific Instrumentalism—note the way he talks about the ‘truth’ (the scare-quotes implying that he’s not being literal) that scientists need to care about as constituted by empirical adequacy. Note also that Macheim, given what he says, doesn’t espouse instrumentalism, i.e., what he says doesn’t entail instrumentalism. If the option had said something about him rejecting instrumentalism, things would be different. Though you don’t need to know this to figure out the answer to the question, it’s possible to read Macheim as espousing a form of metaphysical antirealism (he denies that there is a world that is there independently of how we think), in which case he is going to get his scientific antirealism for free.
- Question 23
Option A (68%). Supposed to be a straightforward rendition of the No Miracles Argument and the problem posed by empirically adequate theories that are known to be false in dialectic against each other. I hope you guys were not overthinking this… (Note the addendum–Option D should be “Neither Abe nor Bern”, though this should be clear from context.)
- Question 24
Option A (45%; most of the rest went for Option D, followed by Option C). If we can’t really have knowledge about things that are observable but unobserved, then scientific knowledge about the unobserved would not be possible. Not Option B since Flanck and Macheim can always bite the bullet and insist that scientific knowledge is only possible for the observed; which also means not Option C.
- Question 25
Option C (52%; most of the rest went for Option A, followed by Option D). Note that Bern is not saying that her description applies to Phosphorus and Hesperus at the moment of their first awakening; it’s just a hypothetical. But that hypothetical breaks Mind-Body Supervenience, and so Reductive Physicalism. What Phosphorus’ report tells us is that at the moment of first consciousness, Phosphorus and Hesperus were observed to display different behaviors–one smiling, the other grimacing. By itself, this doesn’t tell us that they (definitely) had different emotions. In fact, even granting Reductive Physicalism, you can’t deduce that they have different emotions, since what you have is a difference in physical characteristics, which can go with the same mental characteristics under supervenience. So Abe is wrong. (Note the addendum–Option D should be “Neither Abe nor Bern”, though this should be clear from context.)
- Question 26
Option C (74%; most of the rest went for Option A). Note that since one of them was smiling and the other was grimacing as if in pain, they weren’t physically identical, which means that it doesn’t even break Mind-Body Supervenience for them to have different emotions.
- Question 27
Option C (73%). Nagel was arguing against (Reductive) Physicalism; he wasn’t arguing against the possibility of Artificial Consciousness. (Note the addendum–the question is “Who is right?”)
- Question 28
Option D (77%; many of the rest went for Option A). The second commenter isn’t talking about the same kind of “indifference” as the sort relevant to the Indifference Principle–i.e., that the female classmate has no strong feelings either way. If the second commenter had gone on to say that, therefore, there is a 50-50 chance that she will end up with the poster, then the Indifference Principle would have been applied.
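For reference, a schematic statement of the principle as it would have to be applied here (my gloss, not the lecture’s exact formulation): given n mutually exclusive and exhaustive possibilities, with no evidence favoring any one over the others, assign each a probability of 1/n.

```latex
% Two possibilities (she ends up with the poster / she doesn't) and no
% evidence favoring either, so applying the principle would give:
\[
  P(\text{ends up with the poster}) = P(\text{does not}) = \tfrac{1}{2}
\]
```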
- Question 29
Option D (52%; most of the rest went for Option B, followed by Option C). Not Dave because we might well come to the justified belief that it is impossible to simulate consciousness even though, in truth, it is possible to simulate consciousness. Not Will because even if what he says is true, all it says is that any posthuman civilization that is running a large number of ancestor simulations will have access to more energy than is typically available to an M-size planet (such as Earth).
- Question 30
Option C (75%; many of the rest went for Option B). Not Gene since they would just be having their exams in the virtual world, that’s all. Not Dave because–for all we know–the simulation program could be non-deterministic (e.g., the simulators are Libertarians with respect to the Standard Argument and found a way to code free will into their sims). (Note the addendum–the question is “Who is right?”)