Update: A new question on Virtual vs. Simulated Life and Substrate Independence vs. the “Brain-Computer Assumption” has been added at the end.
Here we go…
Isn’t AI a developing form of simulated consciousness?
Please review Slides #16-17, 19, 48. Make sure you are clear about the distinction between intelligence and consciousness, and therefore, the distinction between AI and AC. The confusion arises because some researchers use the term “AI” when they really mean “AC”, or alternatively, “Strong AI” (which includes AC) as opposed to “Weak AI” (which doesn’t). And people don’t all agree on using the terms the same way. But whatever else, just be clear that for the purposes of our class, we are distinguishing between talking about AI and talking about AC, just as we distinguished between talking about intelligence and talking about consciousness (see also W10 Slide #25). The above is important, or else you won’t get the punchline of the Simulation Argument.
Why would the posthumans be so free as to give us such boring simulated lives? The video games you provided are all very exciting :c
If we are living inside a simulation then, what is the purpose of having this simulation?
Just as we don’t create games for the NPCs–we create games for ourselves–so likewise, if we are living simulated lives, there’s no reason to expect our simulators to have run the simulation for our excitement rather than for their own purposes. But otherwise, it’s best not to speculate too much about the motivations of the simulators, if they exist. If I’m a simulator trying to see the effects of a certain proposed worldwide climate-change policy, or figuring out how history would have turned out given a small change, most of the simulated population would be living the usual boring lives they were supposed to. The point is merely that, even given our existing motivations, there is a plausible case that we would be interested in running ancestor simulations if we were capable. But still… I can’t resist linking to this old Onion news parody:
If we were to run a simulation of the past, would the player’s actions affect the course of history during the simulation and affect the future (in the simulation), causing a disconnect between real history and the simulation?
If we can run an ancestor simulation, does it mean that determinism is true since everything that will happen is already simulated?
This is a use scenario that I would love to be able to actualize–running simulations to figure out historical counterfactuals. Would “The West” have developed the way it did if there hadn’t been a Roman Empire, or if the Roman Empire hadn’t fallen? Was the relatively early unification of “China” into a bureaucratic state that persists through upheavals overdetermined, or chancy? What if Napoleon hadn’t had a stomachache at the Battle of Borodino? What if the Barisan Sosialis had kept power in Singapore? Etc., etc. The way to go about this is basically to go in and do a special reset of the “initial condition”, then run many instances of that scenario to see how things go, as in the sketch below. Since we are talking about billions of interacting agents, it’s like simulating the weather–there is going to be a lot of uncertainty, and no reason to believe that everything will turn out exactly the same each time we run the simulation.
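Since the method just described is essentially a Monte Carlo experiment over perturbed initial conditions, here is a minimal toy sketch of the idea. Everything in it is my illustration, not anything in Bostrom: a chaotic logistic map stands in for billions of interacting agents, and the noise term stands in for agent-level chance.

```python
import random

def run_history(initial_state: float, seed: int, steps: int = 100) -> float:
    """One run of a toy 'history': chaotic dynamics plus chance events."""
    rng = random.Random(seed)
    x = initial_state
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)                            # chaotic dynamics
        x = min(max(x + rng.gauss(0.0, 1e-3), 0.0), 1.0)   # agent-level chance
    return x

# Reset the 'initial condition' (the counterfactual tweak), then run
# many instances of each scenario with different chance events.
baseline       = [run_history(0.500, seed=s) for s in range(1000)]
counterfactual = [run_history(0.501, seed=s) for s in range(1000)]

# As with weather forecasting, we compare outcome distributions, not
# single runs; no reason to expect the same result every time.
print(sum(baseline) / 1000, sum(counterfactual) / 1000)
```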
If we live in a simulated world, will the problem of evil still be relevant? Will the Theistic God still care about simulated humans if the simulation is so real to the point where the simulated believe that they are real?
If we live in a simulated world–and God isn’t the simulator–then obviously, the POE won’t be relevant. However, we don’t know that we are living in a simulated world. Secondly, suppose we do live in a simulation; then our simulators basically take the place of “God”… and we can definitely pose questions about their goodness and power, etc. As this next question implies:
If we are living inside a simulation and I look ugly in life, can I blame the simulation for being run on poor, low-level hardware/a poor simulator?
Is a below-average brain the reason why we are FLOP-ping the quiz?
If we are sims, then why do we need rest? Seems like a waste of time and processing power to rest.
You are assuming that the simulators didn’t deliberately simulate you a certain way, for their own purposes. Remember also that if we are living in an ancestor simulation, it’s actually rather likely that, rather than ‘hand design’ each NPC (each of us), the simulators would have just left it to an algorithm to generate us according to parameters (a bell curve of intelligence, perseverance, looks, luck, etc.)–something like the sketch below. When you rest, the processor doesn’t need to do much unless you dream. Also, keep in mind that this is an “ancestor simulation” we are talking about–a realistic simulation of the lives of people who lived before posthumanity.
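To make the “left it to an algorithm” idea concrete, here is a minimal sketch. It is purely my illustration (the trait list and the IQ-style scale are made-up parameters): each simulated person is drawn from bell curves rather than hand-designed.

```python
import random

TRAITS = ["intelligence", "perseverance", "looks", "luck"]

def generate_npc(rng: random.Random) -> dict:
    """Draw each trait from a bell curve (mean 100, sd 15, IQ-style)."""
    return {trait: rng.gauss(100, 15) for trait in TRAITS}

rng = random.Random(2020)                    # reproducible 'world seed'
population = [generate_npc(rng) for _ in range(100_000)]
print(population[0])                         # one algorithmically generated 'ancestor'
```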
10^33-10^36 seems a bit sketchy–wouldn’t the simulation need to simulate a lot of things in the environment too (animals, plants, wind, rain…) and not just human brains?
See Bostrom, pp. 4-5 for how he derived his numbers. You don’t need to simulate everything–just enough for the simulated conscious agents not to notice any irregularities. A rough reconstruction of the arithmetic follows.
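The inputs below are Bostrom’s own rough figures from pp. 4-5, and his rounding is loose, so treat the exponents as ballpark only:

```python
# Simulate every human brain that has ever lived; the environment is
# rendered only to the fidelity the simulated observers can check.
humans_ever       = 1e11          # ~100 billion humans have ever lived
years_per_life    = 50            # rough average lifespan
secs_per_year     = 3e7           # ~30 million seconds in a year
ops_per_brain_sec = (1e14, 1e17)  # range of brain-power estimates

for ops in ops_per_brain_sec:
    total = humans_ever * years_per_life * secs_per_year * ops
    print(f"~{total:.1e} operations")  # ~1.5e34 to ~1.5e37: with loose
                                       # rounding, the 10^33-10^36 ballpark
```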
Can simulations give us free will?
It’s at least conceivable that, if we are simulated, our simulators built an indeterministic element into our programming so that we are genuinely able to do other than what we did–that is, for at least some of our actions/decisions, the past state of the (simulated) universe does not uniquely determine what we will do or choose to do. The sketch below illustrates the idea.
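Here is a toy sketch of what such an “indeterministic element” could look like in programming terms. It is entirely my illustration (`preference` is a hypothetical deterministic ranking): the agent’s state narrows the options, but a genuinely unpredictable draw settles among the remaining live options, so the past state does not fix the choice.

```python
import secrets

def preference(agent_state: dict, option: str) -> int:
    """Hypothetical deterministic ranking derived from the agent's state."""
    return agent_state.get(option, 0)

def decide(agent_state: dict, options: list) -> str:
    """Deterministic narrowing, then a genuinely unpredictable tiebreak."""
    best = max(preference(agent_state, o) for o in options)
    live = [o for o in options if preference(agent_state, o) == best]
    return live[secrets.randbelow(len(live))]   # not determined by the past

print(decide({"tea": 2, "coffee": 2, "gruel": 0}, ["tea", "coffee", "gruel"]))
```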
If dual aspect theory or mind-body dualism is true, then the posthumans cannot simulate our consciousness, right?
Can I say that dualism is true if the simulators programmed souls into us?
Wait, wouldn’t the dualist/dual aspect theorist not believe in substrate independence and the brain-computer assumption?
As I pointed out in the Webinar, this doesn’t really follow. All that is needed is that artificial consciousness can be created using computer software and hardware. But in principle, the consciousness that’s created may be either fully reducible to the physical processes, or based on the physical processes but a distinctly mental phenomenon, or some kind of distinctly mental thing that reliably appears whenever a certain type of programming is executed on a suitable kind of hardware…
In principle, anyway. Otherwise, yes, the typical scientist/philosopher who believes that “substrate independence” is true is also a physicalist. Likewise, the typical Dualist or Dual Aspect Theorist would likely not be enthusiastic about substrate independence.
Note that even if you are a Dualist, you can totally believe that there’s a reliable ‘physical’ process for making things with consciousness come into existence. I understand it’s called “making babies” in some quarters.
Let’s say the posthuman sim goes on until we reach their level and can reverse engineer the sim. We’d have a way to turn the sim against them, so they would have to prevent us from doing so. But if they interfere, won’t the sim be inaccurate?
This seems extremely unlikely. What’s the best that the agents/bots in a simulation can figure out? The rules of their virtual world. Whatever they get to see/feel/think has to be something that the programming allows and enables them to see/feel/think…
But simulations involve the knowledge that we know/have–how would we/they simulate things we don’t know about?
Should they bother? Presumably, they need to simulate the things that are in the consciousness of the simulated beings, and just enough of everything that supports that–something like the lazy, on-demand rendering sketched below.
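In computing terms, this is lazy, on-demand evaluation, the way game engines only render in detail what a player can actually see. A minimal sketch, entirely my illustration (the fidelity scale and function names are made up):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def region_state(region: str, fidelity: int) -> str:
    """Stand-in for expensive physics; cost would grow with fidelity."""
    return f"{region} computed at fidelity {fidelity}"

def observe(region: str, attention: int) -> str:
    """Unattended regions get coarse statistics, not full detail."""
    fidelity = min(attention, 10)
    return region_state(region, fidelity)

print(observe("Andromeda", attention=1))   # coarse: nobody looks closely
print(observe("lab_bench", attention=10))  # fine-grained: under a microscope
```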
How do I know if I’m the player or an NPC?? Is Prof Loy an NPC?
Maybe NPCs and ‘Protagonists’ are real: many of us are unexceptional; only a few rise to the top.
If we are questioning whether we are simulated or not, does that mean we can tell a little “from the inside”?
Let me put it this way–if you don’t already know that you are an avatar of a simulator, then you can’t know whether you are an NPC or not… I don’t already know that I am an avatar of a simulator… so…
Is Bostrom begging the question by coming up with his own propositions just to support his conclusion?
As pointed out in the Webinar, it can look this way. And you are certainly not the first to make this observation. However, note that he isn’t assuming any of these propositions to be true. Rather, he is asserting a series of conditional claims–if (1) and (2) are false, then (3) is true, etc. And backing up those claims with a mathematical argument. This doesn’t mean that there literally aren’t any potentially controversial assumptions hiding somewhere, but whatever they are, they don’t consist in the bare idea of Propositions (1), (2), or (3).
But wouldn’t this assume a multitude of human civilizations? How would a human civilization arise again after it becomes extinct?
It’s human-type consciousness and human-level technological civilization. It doesn’t have to refer literally to exactly our civilization.
How can Propositions (2) and (3) be true at the same time? Don’t they contradict each other?
I don’t get how some of these pairings/trios of “true”s are possible. If no civilization reaches posthumanity, how can it be true that we are surely living in a simulation? Or if it is outlawed, how can we almost surely be living in a simulation?
See the last part of the additional handout “Nick Bostrom’s Mathematical Argument (Loy Oct 2020)”. I was surprised when I figured this out too, but the mathematics imply that those combinations are possible. The only combination that’s ruled out is, exactly as the argument says, the one where all three propositions are false.
How was 1/7 calculated? I’m still confused by the maths
This is referring to the chart on Slide #44. When you have three variables, each of which admits of either a “yes” or “no” possibility, you have a total of 2^3 = 8 possible combinations. The last one–where all three propositions are false–is ruled out by Bostrom’s argument. But since the others are still possible, we have a total of 7 possibilities. And since we don’t know anything that will make any one of them more or less likely than the others, the Principle of Indifference tells us to assign 1/7 as the probability of each combination (represented by a row). The enumeration below spells this out.
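If you prefer to see the counting done mechanically, here is a short enumeration (my sketch, matching the chart on Slide #44):

```python
from itertools import product

# All 2^3 = 8 truth-value combinations for Propositions (1)-(3);
# Bostrom's argument rules out only the all-false row.
rows = [combo for combo in product([True, False], repeat=3) if any(combo)]
print(len(rows))              # 7 rows remain

p_each = 1 / len(rows)        # Principle of Indifference: 1/7 per row

# Rows where Proposition (3), "we are almost certainly living in a
# simulation", is true:
p_sim = sum(p_each for (_, _, p3) in rows if p3)
print(p_sim)                  # 4/7 ≈ 0.571
```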
Is it possible that posthuman civilisation is simply impossible? What if we don’t go extinct but we also don’t become a posthuman civilisation? Can we really assume that we just go extinct before posthuman civilisation?
This issue comes up later in my response to the Simulation Argument. See Slides #48-50.
I’m not sure that the premises in the 4-way argument are mutually exclusive; I think (1′) and (4) cannot logically both be true.
Continuing from the “(1′) and (4) cannot both logically be true” comment: the probability is 8/14 = 4/7.
It’s going to turn out that in the 4-way, there aren’t 16 possible combinations, but only 8. This is because, as the new Propositions (1′)-(3′) are defined, they rule out (4) and vice versa. So instead of 4/7 as the odds that we are already living in a simulation, the 4-way gives us 4/8, i.e., 0.5. Which is also ironic–since it’s equivalent to treating “we are simulations” vs. “we aren’t simulations” as two mutually exclusive and jointly exhaustive options, so, 0.5 probability for each… The enumeration below shows where the 8 comes from.
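Again, a short enumeration (my sketch; I am encoding just the two constraints stated above: at least one proposition must be true, and each of (1′)-(3′) excludes (4), with the “we are sims” rows taken to be those where (3′) is true):

```python
from itertools import product

rows = []
for p1, p2, p3, p4 in product([True, False], repeat=4):
    if not (p1 or p2 or p3 or p4):    # all four false: ruled out
        continue
    if p4 and (p1 or p2 or p3):       # (1')-(3') each exclude (4)
        continue
    rows.append((p1, p2, p3, p4))

print(len(rows))                       # 8 combinations, not 16
p_sim = sum(1 for (_, _, p3, _) in rows if p3) / len(rows)
print(p_sim)                           # 4/8 = 0.5
```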
How would one know if simulated consciousness actually exists? It seems we can only rely on evidence provided by the physical substrate, but a DAT would say that there is an inherently subjective mental state that cannot be accessed/known to exist.
Similarly, how would a physicalist make a definitive conclusion that a certain physical property of a computer is the source of consciousness/mental states? How can one actually measure whether an object has the property of being conscious?
You are asking all the right questions! The more you take Nagel’s ideas seriously, the less you will be impressed by the claims made by people who think simulated consciousness is possible. That is, you don’t quite have an argument for showing that it’s impossible, but you would be wary of claims to the effect that consciousness has been simulated–you would wonder whether we are justified in believing them. As you recall from last week, we have less of a problem in believing that other humans and animals are conscious, partly because of the analogy of both behavior and internal (neuro-physical) structure. No such luck when confronted with a simulated being. And I need to emphasize that last part–for Bostrom’s argument to have bite, it’s not enough for us to believe that we can artificially create consciousness. In an entirely mundane sense, we already do something similar by “having babies”. And presumably science might progress to the point where we can grow test-tube or vat babies, etc. But in such cases, we are still counting on “wetware” to be the physical basis of the consciousness, not just programs running on a computer.
Is the proposition that we are living in a simulation then unfalsifiable…
This topic is very sus.
The proposition that this particular electron in the setup will go this direction is also unfalsifiable… The idea that 50% of the electrons in the setup will go one direction and the other 50% will go the other direction, however, is well established by the math and the experimental results… Also, the idea that we are simulations (or not) is unfalsifiable to us. But plenty of things that we consider scientifically well established were unfalsifiable to our ancestors, or to beings without access to our science and equipment–so why should that matter? And the doctrine of falsifiability–is it itself falsifiable? There’s more to the matter than meets the eye…
Prof Loy should use the anime girl converter facecam
Actually, if any of you know of a reliable anime converter, let me know. I’ve not actually found any that looks good. I have, however, paid for a proper “Futurama”-style portrait…
Considering how many people find 4th wall breaks in movies/books/shows amusing, I can’t imagine how hard the administrators of this simulation are laughing at us and our puny minds grappling with this topic.
Yeah… 😀
Is isekai anime then a simulation of a world on its own? Since the characters are transported to another world, are they now in a simulated world or were they already in a simulated world?
I think that in the typical Isekai (or the related 穿越, “transmigration”) scenario, both worlds are meant to be real, not simulated (i.e., as far as the story goes). But of course, if all this is already a simulation, it’s actually ‘easier’ to implement Isekai scenarios–by moving one conscious agent to a different VM running a different virtual world, let’s say.
What does jointly exhaustive mean?
I am jointly exhausted from GET1029 ;-;
If a bunch of alternatives on an issue are “jointly exhaustive”, it just means that they are all there are–there are no other alternatives (regarding that issue). The alternatives have–together (“jointly”)–exhausted the possibility space. For example, “the number is even” and “the number is odd” are jointly exhaustive (and mutually exclusive) alternatives for a whole number.
Yeah, that makes two of us. But I will keep going until all of you are safely done with the module!
* * * * *
What’s the difference between virtual life vs. simulated life, and substrate independence vs. the brain-computer assumption?
I was using “Virtual Life” to refer to a conscious life lived in a virtual reality (Slide #14). But there’s no commitment as to the nature of this consciousness–whether it’s the consciousness of a “real” or a “sim”. We–let’s say that we are flesh-and-blood beings–can already lead virtual lives today, as avatars in some virtual reality. A Simulated Life, on the other hand, is the virtual life led by a being with simulated consciousness (artificial consciousness generated using computer hardware and software). It’s the virtual life that a conscious NPC leads in virtual reality.
For Substrate Independence vs. the Brain-Computer Assumption (Slide #19)–think of the latter as a more specific version of the former. Note that this is a distinction that I (Prof Loy) am introducing. It’s based on something in Bostrom, but not made explicit in Bostrom. Recall this passage from Bostrom’s paper (p. 2):
The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon‐based biological neural networks inside a cranium: silicon‐based processors inside a computer could in principle do the trick as well.
Substrate Independence corresponds to the first part of the passage (consciousness need not be implemented on carbon-based biological neural networks); the Brain-Computer Assumption corresponds to the final claim (silicon-based processors inside a computer could in principle do the trick as well). In other words, for our purposes,
Substrate Independence = consciousness can be implemented on a physical base other than carbon-based biological neural networks, i.e., a brain (“wetware”).
Brain-Computer Assumption = consciousness can be implemented on computer hardware and software, i.e., by implementing the computational activity of the brain.
So the Brain-Computer Assumption will imply Substrate Independence but not the other way round. Bostrom also has this bit that expands on what I called the Brain-Computer Assumption further (still on p. 2):
…just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine‐grained detail, such as on the level of individual synapses.
If the above is true then, of course, Substrate Independence is also true. But if all you know is that Substrate Independence is true, you can’t deduce that the Brain-Computer Assumption is also true–after all, it might be the case that computer hardware and software aren’t suitable alternate physical bases for consciousness but other things are. The thing behind what I called the Brain-Computer Assumption is the idea that consciousness supervenes on the computational activities of the brain in such a way that once you have implemented the same computational activity on the alternate hardware, you’ve got your consciousness generated. But presumably, someone can believe in a form of Substrate Independence without also believing this more specific claim.
A further thought. In the Webinar, I mentioned that Reductive Physicalism isn’t really required for the possibility of simulated consciousness, per se. Yes, that seems strange. Think also in terms of what’s minimally needed for a sim to be created–what the simulators need is a reliable method for generating consciousness on a particular kind of alternate hardware. At least as far as the terms have been defined so far, it’s not as if the possibility of such a method is incompatible with dualism. Remember that human beings are already in the regular business of generating conscious things–by making babies! Presumably, the possibility of such a method is compatible with all three theories.
What’s minimally needed is not that we fully understand how it can be so, but that it can be so. That is, that intelligent beings can make something happen–whether or not anyone understands how or why it happens. If we ask ourselves–could the ancient (human) civilizations on the Eurasian continent forge bronze? The answer is–of course they could. Did they understand why and how what they did worked? Unlikely. Now, which one does it take for there to be sims? That something is feasible (whether or not anyone understands how it can be so), or that people understand how it can be so? Obviously, we would tend to be more confident that something is feasible if we understand how it can be feasible. But that is about whether we are confident that something is feasible; for the Simulation Argument, we are talking about what’s minimally required for it to be feasible to create sims.