Ok, here goes the last of the lot. Click through to see…

 

  • Question 1

Option B (“Bern only”); most of you (90%) got this.

Joan is wrong: those zombies are not Sims in the sense required for Bostrom’s argument, both because we don’t have this technology, and because… they are supposed to be zombies, right? (The zombies in the game have artificial intelligence, but they don’t have a simulated mental life with (simulated) conscious experiences.)

Bern is right: Sims (by definition) have simulated consciousness (otherwise they wouldn’t be Sims in the requisite sense). So if we have reason to believe that simulated beings can never be conscious, then, contrary to Bostrom’s argument, we have reason to think that we aren’t already living in a simulation; that is, we have no reason to believe that we might already be Sims living in a simulation.

Expanded: Philip is wrong. If simulated consciousness is not feasible, Sims in the requisite sense cannot exist, and we are not Sims. Can non-conscious AIs still “worry about Bostrom’s Argument” on the grounds that they might be (non-conscious) AIs? In principle, I can imagine a way to talk about non-conscious AIs “worrying” (strictly in a sense that doesn’t involve any conscious experience) that they are AIs. But whatever they are doing, they aren’t worrying about whether they are Sims in the context of Bostrom’s argument, because that requires them to have a human-type conscious experience while being unable to tell whether it is natural or artificially generated.

 

  • Question 2

Option C (“Exactly 2”); most of you (90%) got this. Update: I’ll accept Option B (“Exactly 1”) as well.

Abe is correct because the truth of Mind-Body Reductionism (and Reductive Physicalism) is not a necessary condition for the truth of substrate independence. In principle, consciousness can still be instantiated on physical substrates other than carbon‐based biological neural networks inside a cranium even if the facts about our conscious experiences are not reducible to facts about the physical characteristics of our bodies/brains.

On the other hand, Cain is wrong. Even if I accept that the mental supervenes on the physical, I can still hold that consciousness can only be instantiated in carbon‐based biological neural networks, i.e., that it can’t be instantiated using mere computer hardware and software.

Zoey is correct just by the definition of substrate independence. Notice that the first part of what Zoey said is basically derived from the negation of something that Bostrom said to characterize substrate independence (“It is not an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium”). In addition, an implication of the falsity of substrate independence is that Sims (“observers with human-type experiences that are, in fact, simulated consciousness”) cannot exist.

Update: Ok, a student pointed out an issue with what Zoey said, and I think it makes sense. To recall, what she said was:

If substrate independence is false, then it is an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium

For Zoey’s statement to come out correct, we have to read Bostrom’s “It is not an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium” as a necessary and sufficient condition for Substrate Independence, and in particular as a sufficient condition. Overall, I do think that Bostrom meant it that way; he was providing an explication of Substrate Independence.

But there is nonetheless something very odd about this, especially the “sufficient condition” part: it implies that if consciousness can be implemented on something that doesn’t have a cranium (the top part of a skull), then Substrate Independence is true. So if we discover that there can be such a thing as an invertebrate that has enough of that carbon-based neural network but no cranium, then we have Substrate Independence already. Conversely, if Substrate Independence is false, then a conscious invertebrate is impossible.

I’m not entirely sure what’s going on, but I think Bostrom is basically basing his whole discussion on what’s needed for a human-type consciousness to be independent of our assumed physical substrate, which presumably includes a carbon-based neural network in a cranium. Now, if we take Zoey’s statement as spoken purely from Bostrom’s point of view, then we should accept what she says as true. But taken on its own, I can see why some might be unwilling to accept it. For this reason, I will count those who selected Option B (“Exactly 1”) as correct as well. This will only be reflected in Gradebook.
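To make the structure of the worry explicit, here is a formalization (my own shorthand, not Bostrom’s notation): let $E$ stand for “it is an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium,” and $SI$ for Substrate Independence.

$$SI \leftrightarrow \neg E \quad\text{yields both}\quad SI \rightarrow \neg E \quad\text{and}\quad \neg E \rightarrow SI.$$

Zoey’s claim is $\neg SI \rightarrow E$, which is just the contrapositive of the second (sufficiency) direction, $\neg E \rightarrow SI$; and that sufficiency direction is exactly the part one might resist.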

 

  • Question 3

Option B (“Cain only”); I’m surprised that only 22% got this; the majority selected Option D (“None of the above”).

Abe is wrong because Bostrom explicitly states that “convergence on an ethical view of the immorality of running ancestor‐simulations is not enough: it must be combined with convergence on a civilization‐wide social structure that enables activities considered immoral to be effectively banned” (Bostrom, p. 11).

Zoey is wrong because the fact that most posthuman civilizations find the scientific value of ancestor-simulations negligible is a possible, but not necessary, reason why proposition (2) may be true. Almost all of you got the above. Update: You can see this with reference to the longer explanation below for why Cain is right.

Cain is right because Bostrom claims (also p. 11) that “[o]ne conclusion that follows from (2) is that posthuman societies will be very different from human societies: they will not contain relatively wealthy independent agents who have the full gamut of human‐like desires and are free to act on them”. In other words, the truth of (2) is a sufficient condition for posthuman civilizations not containing such agents. But that’s just another way of saying that the absence of such agents in posthuman civilizations is a necessary condition for the truth of (2).
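In symbols (again my own shorthand), write $W$ for “posthuman civilizations contain such wealthy, free agents.” Then:

$$\big[(2) \rightarrow \neg W\big] \;\equiv\; \big[W \rightarrow \neg(2)\big]$$

A sufficient condition read in one direction is a necessary condition read in the other: since (2) suffices for $\neg W$, $\neg W$ is necessary for (2).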

Additional: I think many of you were distracted by the fact that Bostrom placed the sentence beginning “One conclusion that follows from (2)…” at the end of the second complete paragraph on p. 11, rather than inserting a paragraph break before it. But (logically speaking) that sentence actually sums up the first two complete paragraphs on p. 11, closing a discussion that began at the bottom of p. 10. In fact, if you look at the beginning of this discussion, you see that the conclusion statement was already anticipated there:

…virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires. (pp. 10-11)

One conclusion that follows from (2) is that posthuman societies… will not contain relatively wealthy independent agents who have the full gamut of human‐like desires and are free to act on them. (p. 11)

The first complete paragraph on p. 11 considers the scenario where there is a widespread ethical injunction against creating ancestor simulations in posthuman societies and the effective means to ban their creation. The second complete paragraph on p. 11 considers the scenario where most posthumans don’t have the desire to run such simulations. Note that both scenarios imply a radical change to human psychology/general beliefs compared to the present (according to Bostrom anyway).

The conclusion statement pulls everything together because, whichever scenario is true, the outcome is that the wealthy individuals in posthuman societies will either (a) accept an ethical prohibition against running ancestor simulations and so either don’t desire to do so or are “not free” to do so (they stop themselves), or (b) think it’s permissible but don’t desire to run such simulations, or (c) if they do desire to run such simulations (whether or not they also think doing so is ethical), aren’t free to do so, because the thing is effectively banned.

 

  • Question 4

Option D (“Cain only”); most of you (91%) got this.

Abe is incorrect because Bostrom is merely deducing the relationship between the fraction of all human-level civilizations that survive to reach a posthuman age and some of the other variables he defines. Hence, Bostrom need not know the exact value of that fraction for his argument to be sound.  

Cain is correct (and Zoey is thus incorrect) because if the assumption that the number of simulations run by an interested posthuman civilization is very large is false, then even if propositions (1) and (2) are false, it need not follow that there are proportionately many, many more Sims than there are Reals. Instead, if each interested posthuman civilization only runs a few ancestor simulations, then Sims might not hugely outnumber Reals. And if Sims don’t hugely outnumber Reals, then our credence that we are Sims is not approximately 1.
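A rough numeric sketch may make this vivid. The numbers below are made up purely for illustration, and the counting follows the simple observer-counting logic above rather than Bostrom’s exact notation:

```python
# Illustrative only: made-up numbers, not Bostrom's actual estimates.
# Suppose a "real" civilization contains H observers, and an interested
# posthuman civilization runs N ancestor simulations, each containing
# roughly H simulated observers.

def fraction_sim(N):
    """Fraction of all observers who are Sims, per real civilization."""
    H = 1.0  # normalize: one real population's worth of observers
    sims = N * H
    reals = H
    return sims / (sims + reals)

for N in [2, 10, 1_000_000]:
    print(f"N = {N:>9,}: fraction of Sims = {fraction_sim(N):.6f}")

# With N small (say 2), only 2/3 of observers are Sims, so the credence
# that we are Sims is nowhere near 1. Only when N is huge does the
# fraction approach 1, which is exactly Cain's point.
```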

Zoey is wrong because what she says would just imply that Proposition (1) is true, and if (1) is true, the argument is sound.

 

  • Question 5

Option A (“Will only”); most of you (93%) got this.

Will is correct since the truth of the conclusion is a necessary condition for the argument to be sound. (A sound argument is valid and has all true premises, and a valid argument with true premises must have a true conclusion.)

Gene is wrong. We don’t even know whether the conclusion is false when we know that the argument is unsound. But suppose the conclusion is false, i.e., it’s not true that the three propositions can’t all be false at once. This says nothing about whether it’s possible for them to all be true at once. More generally, suppose we know that the argument is unsound. From this, all we can deduce is that either the argument has a false premise or it is invalid. That doesn’t give us any way to deduce whether it’s possible for (1), (2), and (3) to all be true at once either. So it’s not true that “we certainly also know that it is possible for all three propositions to be true.”

Dave is wrong. The argument doesn’t assume that Proposition (2) or Proposition (3) is true. What the premise says is that, between Propositions (1), (2), and (3), if any two are false, the third is true. In other words, if we break the Premise apart into components, it says:

Premise 1A: If (1) and (2) are both false, then (3) is true.

Premise 1B: If (1) and (3) are both false, then (2) is true.

Premise 1C: If (2) and (3) are both false, then (1) is true.

The idea that (1) is true while (2) and (3) are both false does not contradict the above at all.
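If you’d like to check this mechanically, here is a quick brute-force sketch (my own illustration; the three propositions are just booleans):

```python
from itertools import product

def premises_hold(p1, p2, p3):
    # Premise 1A: if (1) and (2) are both false, then (3) is true.
    a = p3 if (not p1 and not p2) else True
    # Premise 1B: if (1) and (3) are both false, then (2) is true.
    b = p2 if (not p1 and not p3) else True
    # Premise 1C: if (2) and (3) are both false, then (1) is true.
    c = p1 if (not p2 and not p3) else True
    return a and b and c

# Dave's worry: (1) true while (2) and (3) are both false.
print(premises_hold(True, False, False))   # True: no contradiction

# In fact, the only assignment the premises rule out is all-false:
for p1, p2, p3 in product([True, False], repeat=3):
    if not premises_hold(p1, p2, p3):
        print("ruled out:", p1, p2, p3)    # only (False, False, False)
```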

 

  • Question 6

Option B (“Zoey only”); most of you (89%) got this.

Abe is incorrect because Bostrom anticipates this objection on p. 5 of his paper. He argues that a posthuman civilization would not have to simulate the whole universe to make the simulation convincing. Instead, the microscopic world could just be filled in ad hoc.

Cain is incorrect because the conclusion of Bostrom’s argument only requires that one of his three propositions be true. Hence, the lack of interest in running ancestor simulations is consistent with Bostrom’s conclusion.

Zoey is correct because, even if it is impossible to simulate human consciousness in a computer, it is plausible that the human race will exist for a prolonged period of time and become very technologically advanced without ever reaching a posthuman state.

 

  • Question 7

Options B (“Lena”) and D (“Tess”). The majority (65%) got this.

Dave has the incorrect understanding because Dualism and Physicalism are not jointly exhaustive alternatives; remember that Dual Aspect Theory is a third option. Almost all of you (95%) got this.

Gene has the incorrect understanding because they surely have (strong) evidence that Tess is a fan of Japanese culture! Most of you (85%) got this.

Lena and Tess have the correct understanding because the Principle of Indifference applies when the options presented are mutually exclusive and jointly exhaustive, and it must also be the case that we do not have evidence which points us in the direction of any of the options. Almost all of you (98%) got Lena, but slightly fewer (75%) got Tess. So let me go through Tess’s case. What she said was:

Remember when I showed anime to all of you? Lena thought it was good but the rest thought it was not. We were in a genuine disputation (as Mozi would have called it), but we eventually came to the conclusion that there was no evidence favoring either view. According to the Principle of Indifference, we should assign a probability equal to 0.5 that anime is good.

So basically, we have two positions: “anime is good” (Lena thought this) and “anime is not good” (“the rest”, i.e., Dave, Gene, and Will thought this). Given that the two sides form a genuine disputation, it also follows that the two positions are mutually exclusive (can’t both be true) and jointly exhaustive (can’t both be false). You can figure out the rest.
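Spelling out the last step: the Principle of Indifference assigns equal probability over $n$ mutually exclusive, jointly exhaustive options when no evidence favors any of them.

$$P(\text{option}_i) = \frac{1}{n} \quad\text{for each } i; \qquad \text{here } n = 2, \text{ so } P(\text{anime is good}) = \frac{1}{2}.$$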

 

  • Question 8

Option C (“Tess”) only. 48% got this.

To recap, the argument goes:

Premise: Between the five TAs, if any four of them were not responsible for this, then the fifth one is.

Conclusion: It can’t be that none of the TAs are responsible for this.

Gene is wrong because the conclusion of Dave’s argument only implies that it cannot be the case that none of the TAs were responsible, i.e., at least one of the TAs is responsible for it. But this is compatible with, e.g., 5 of them being responsible, or 4, or 3, or 2. Most of you (93%) got this.

Dave is wrong because it can still be the case that our simulators allow us to think that we might be in a simulation while we are, in fact, in one. There is nothing in the Webinar or in Bostrom’s reading that supports Dave’s claim. We probably won’t find out we’re in a simulation unless it is revealed to us (slide #52), but this isn’t the same as us not being able to entertain the idea that we might be in a simulation, or even being convinced of it. Almost all of you (99%) got this.

Tess is right because Bostrom (p. 12) says that we can still have reasons to act morally even if we are in a simulation. There is still the possibility of an afterlife where we are judged for our actions, possibly by our simulators. Almost all of you (97%) got this.

This leaves Lena, who is wrong. What she said was: “If Prof Lloyd was lying and he was actually the one behind all of this all along, then the argument is invalid!” But whether or not Prof Lloyd was responsible for the thing does not affect whether the conclusion logically follows from the premise, i.e., whether the argument is valid. Given suitable assumptions, the idea that Prof Lloyd was responsible for the thing implies that the premise and the conclusion of the argument are false, and so the argument is unsound. (Suppose at least one person has to be responsible for the thing, and Prof Lloyd is that person. It follows that, between the five TAs, even if any four of them were not responsible, the fifth one need not be responsible either.) But just because the argument is unsound, it doesn’t follow that it’s invalid. Around half of you were distracted by this option.
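To see the validity point concretely, here is another small brute-force sketch (again my own illustration): enumerate every way of assigning responsibility to the five TAs and confirm that no assignment makes the premise true and the conclusion false. Notice that Prof Lloyd doesn’t even appear in it, which is the point.

```python
from itertools import combinations, product

def premise(resp):
    # "If any four of them were not responsible, then the fifth one is."
    for four in combinations(range(5), 4):
        fifth = (set(range(5)) - set(four)).pop()
        if not any(resp[i] for i in four) and not resp[fifth]:
            return False
    return True

def conclusion(resp):
    # "It can't be that none of the TAs are responsible."
    return any(resp)

# Validity check: no assignment makes the premise true and the
# conclusion false, so the conclusion follows from the premise.
print(all(conclusion(r)
          for r in product([True, False], repeat=5)
          if premise(r)))   # True: the argument is valid
```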