Below are my notes for each of the questions submitted beforehand, with some brief expansions. Those actually touched on in the recording are marked “#”. The questions we didn’t get to come after the break (* * * * *). Questions added in chat are marked “%”.

Update: More questions that came in via email added to the end (search for “Questions that came later by email”).

 

Clarification Questions

#1 Hi prof, can you explain more about what is an ‘argument against’ something? I think that some of the students still keep getting tricked by questions that ask whether a character has formulated an argument against another character or a philosopher’s premises.

#3 Consider arguments X and Y. If X is true, Y is false. Does that immediately make X an objection to Y? What if X is a different argument (than Y) with a conclusion denying Y’s conclusion and X doesn’t say anything about the soundness of Y?

The short answer is—you failed to “object” if what you said is compatible with what the other person said. Let’s say that he said “X”, and you say “Y”. The simplest scenario would be where, if Y (what you said) is correct, then X (what he said) is wrong. In the standard case, the “correct”, “wrong” at stake here is just “true”, “false”–but in principle, they could also be “justified”, “unjustified”, and so on.

But suppose X is an argument. Here, note that nothing stops you from objecting against the conclusion of the argument independently of the argument. But if you do object against the argument itself, then you need to be saying something which implies that the argument is unsound, or at least probably unsound, etc. And to do that, you need to be objecting against one of the premises, or against the validity of the argument.
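Schematically (my own shorthand–nothing from the slides, and nothing you need to memorize): suppose the other person’s argument is

\[
P_1,\ P_2,\ \ldots,\ P_n \ \therefore\ C.
\]

Then:

\[
\begin{aligned}
\text{Objecting to the conclusion: } & \text{say something which, if correct, implies } \lnot C \text{ (you can do this independently of the argument).}\\
\text{Objecting to the argument: } & \text{say something which, if correct, implies } \lnot P_1 \text{ or } \ldots \text{ or } \lnot P_n,\\
& \text{or that } C \text{ does not follow from } P_1, \ldots, P_n \text{ (i.e., the argument is invalid).}
\end{aligned}
\]

Merely saying something compatible with all of the above isn’t yet an objection to anything.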

#2 What is the difference between Modus Ponens, Modus Tollens, Direct Statement and Contrapositive?

It’s all in “A Short Lesson on Arguments and Logic”. If necessary, approach tutor to clarify specific details.

#4 What exactly is a descriptive theory? How is it different from a philosophical theory?

% what is the difference between normative and prescriptive?

The distinction is between a descriptive vs. a prescriptive theory. Then there is a separate issue of a philosophical theory vs. a theory of some other kind. I think I answered the other one in an earlier Q/A–we’re not making a fuss over “prescriptive” vs. “normative” for our purposes.

% does this mean majority of the metaphysical arguments would mostly have descriptive theories while epistemological would consist of normative theories?

Most of metaphysics is descriptive. And a large part of epistemology is prescriptive. But it doesn’t mean that descriptive conclusions cannot affect the prescriptive theories.

#5 Feeling trapped: do my preferences cause me to beg the question against others? If so, how do I get out of it?

You beg the question against another only if (a) you offer an argument against the other person’s position, and (b) your argument basically assumes that the other person’s position is false. By the way, this doesn’t mean your argument is unsound–it just means that the other person (and anyone else not sharing your assumption) need not be irrational to reject your argument.

We can’t completely avoid begging the question against the other if it turns out that there are fundamental and unbridgeable disagreements. Yes, in one sense, we are facing “made up minds” and so, begging the question against each other. But that might not always be avoidable.

% Which is better, debate or discussion, or is it the same?

% why are debates so restrictive? its not fun

% which is better, debate or duelling?

All good–just be civil to each other. Also, in philosophy, a “debate” doesn’t have to be nasty. Most philosophers are pretty good at saying “your argument is surely unsound” during seminars, and then heading out for a drink together. Or at least most aspire to be.

 

General Philosophy Questions

#6 There are questions that science can answer with experimentation and logic. However, can philosophy ever truly answer questions? Or are they merely the most convincing explanations given current observations?

There are some questions that can (currently) be answered using empirical methods. There are some questions that can’t (currently) be answered using empirical methods—but they are still important questions. Secondly, even when we answer questions using empirical methods, it’s not as if we are completely innocent of philosophical assumptions at the foundations of those methods. There’s also a historical background to how these things got sorted the way they did. (Much in the recording.)

% mathematical methods involve formal and pure logic. tbh I think math is more similar to philo than science. and math, even very abstract math, has a lot to bring to science (eg group theory). just like philo

Most disciplines have a philosophical boundary–at the foundations, and at the cutting edge. Doesn’t mean that philosophers who aren’t good at that discipline will always be able to contribute significantly, but the practitioners in those disciplines are basically “doing philosophy” once they start looking at the epistemological and metaphysical foundations of their disciplines, and at the point where their usual empirical methods run out.

#7 When philosophers arrive at a conclusion, do they try to live by it? For example, will Norcross always be a vegan and will Singer actually give until he reaches the level of marginal utility?

(In the recording) 

#8 Who determines the rules of logic (e.g., but not limited to, quasi-logical rules)? How are they determined?

Same as asking “who determines the rules of math”? Philosophy of logic/math beckons…

#9 What are some related philosophy modules to some of the topics we have learnt in this module?

#11 How are the other 2K/3K etc modules different from the exposure module that we are currently doing?

I normally end each topic with some module recommendations.

#10 If I am interested, but doing badly in philosophy, should I still major in it? Are other philo modules as hard/harder??

#13 How do I know if I should major in philosophy?

If thinking hard appeals to you, etc. Also if the range of topics appeals to you, rather than just one or two things. The question of aptitude for academic study of philosophy (rather than as a hobby) = a certain kind of sharpness in thinking. Try out a 2K module in a topic of your interest–it’s actually possible to get better with practice.

#12 What are some of the job prospects if I major in philosophy?

Not too different from typical FASS graduates. Many take up generalist jobs. Unless you are pursuing an academic career (needing a PhD), there are no jobs that require a philosophy degree (just as there are few jobs that specifically require most other pure academic degrees). Let me do a blog post about some of my recent alumni one of these days.

#14 Any films/books about philosophy to recommend?

Tutors and I are working on a list of recommendations/favorites.

#15 Could you explain more about the concept of Absurdism in existential philosophy? And do you buy it?

Now you know which tutor is into this (see recording) so you know who you can bug…

Personal Questions

#16 How did y’all know that y’all wanted to major in philosophy?

#17 Why did the tutors choose to major in philosophy?

#18 What are the tutors currently doing now? Job, Masters/PhD etc.

#19 What are the tutors’ area of interest in philo?

% prof what is your thesis haha

(See recording…) My HT was on Xunzi’s philosophical anthropology, my MA was on the idea of “Correcting Names” in the Confucian Analects, and the PhD dissertation was on the moral philosophy of Mozi–so, a lot of Chinese philosophy stuff. But the frame of interest is generalist–logic, epistemology, morality, politics, etc.

#20 Do we need to remember the story of Tess, Lena, Dave, etc for the finals? What if I don’t have the memory space for their background story and context, will it be available?

#21 How tough is the exam going to be, compared to the weekly quizzes that we have? And how do you suggest we revise for it? Are the lecture notes sufficient or do we need to read up on the course blog too?

#22 How would the exam be like? Would we have to join a common zoom call? Or do we just access the document and submit our answers before time is up?

#23 Hi prof, what are the tips you have for us on how we can tackle the finals? (E.g. read question carefully, remember important terms from the lectures, look for any sus things the characters say) We won’t be able to discuss the questions with our peers and it could be very scary for us :c

#26 Even if it is open book, the time constraint is really scary. How may we manage our time more efficiently?

#28 Will we have a check list for the points we need to know?

I will say more about the final quiz in W13. A few quick points. The final will (most likely) consist of 30 MCQ/MRQs and 2 SAQs. The MCQ/MRQs will be easier/shorter than the weeklies. The SAQs won’t ask for sophistication, just baseline accuracy–and your own sincere engagement! (So in that sense, they are a bit like the weekly discussion summaries.) I am currently planning the thing with back navigation, though I’ve not settled all the details.

No–please don’t go around memorizing the back stories of Tess, Lena, Dave, Will and Gene. Let alone Abe, Cain, Bern, Prof Lloyd, and the rest of the gang. At least Alex the exchange student from France, the Philosophy Interest Cult, Robert the Hyper-intelligent hamster, and Gatniss and Kale and the rest of the Resistance against the rule of the Guardians of Athenikka didn’t make an appearance this time.

#24 Quite a few philo questions seem unnecessarily semantic (i.e., “nothing other than another human life is of comparable moral worth”)–most will take that statement as an indictment of murder, not an argument about whether, if it’s wrong to take one life, it’s wrong to take 2 or 3. Any tips on spotting these semantic questions?

#25 Some questions’ answers may vary due to a little adjective or description that is “hidden” inside the sentences. Any tips on how to find them?

#27 Is the word ‘definitely’ in quizzes usually sus?

I’ll probably say more about these in W13, but I think I touched on the issue in W01 and in some of the earlier podcasts. The point is that it’s the precise logical distinctions that matter. So it’s not just a matter of language–the language reflects the differences in thought. P if Q isn’t the same as P only if Q, just for instance. Part of what it means to do philosophy is exactly to be highly sensitive to such differences. Thinking sharply and precisely sometimes requires being able to process textual information carefully, noticing details that make a logical difference; not all of them turn on just a few words, but they can. If you have to draw diagrams, do it. Sometimes, we add details to help push you along. As for those “definitely”s–in some of those cases, the “definitely” is not strictly necessary; we put it there to help you (by making you pause and think harder, double check, etc.).
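If it helps, here is a quick, entirely optional way to see the “P if Q” vs. “P only if Q” point for yourself–a few lines of code (my own illustration, not part of the module materials) that tabulate the two readings:

    # "P if Q" reads as: if Q then P.  "P only if Q" reads as: if P then Q.
    # The table below shows that these are different conditionals.
    from itertools import product

    def implies(a, b):
        return (not a) or b

    print(f"{'P':<7}{'Q':<7}{'P if Q':<9}{'P only if Q'}")
    for P, Q in product([True, False], repeat=2):
        print(f"{str(P):<7}{str(Q):<7}{str(implies(Q, P)):<9}{implies(P, Q)}")

The two readings disagree exactly when one of P, Q is true and the other false–which is why a small difference in wording can flip the answer to a quiz question.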

* * * * *

How do the things that we have learned connect with each other?

Remember that the module is built not around a historical narrative, nor around pure foundational training, but around a series of semi-connected topics. There are many conceptual connections between the topics. For instance:

  • Well-being –> Consequentialism –> Factory Farmed Meat, Famine Relief –> LPOE
  • Morality (Consequentialism, deontology) –> Political Authority
  • Morality –> Free Will and Moral Responsibility
  • Preference –> Hypothetical Consent –> Hurley Response
  • (Pleasure) –> Consciousness –> Simulation Argument
  • (Knowledge issues) –> (LPOE) –> Knowledge –> Simulation Argument

A fuller consideration of each topic will likely bring in many concepts that connect with many other topics; there is also the pedagogical choice between a more ‘foundationalist’ vs. a more ‘inductive’ method, and the topic-neutral logic and argument concepts that are useful everywhere. Think back to the purpose of the module: a survey of topics, showing you the methods and style of modern academic philosophy. What you get: bragging rights…; exposure to philosophical topics, methods, and style; practice in thinking skills fundamental to the discipline of philosophy and useful for other intellectual subjects, the workplace, and life in general… The difference between “learning (a bit of) Kungfu” vs. “learning a bit about Kungfu”. I intend to say more about the above in W13 when we do the recap of all the topics.

% Is PPE philo or majoring in philo harder thonk

% Prof the PPE requirements very high, I scared I can’t major in it :c

I’m planning to conduct a briefing for students interested in the PPE program. Most likely early in the coming semester. More about the thing then. Yes, it’s not meant to be ‘easy’.

 

Topical Questions

How do we decide what is a ‘false belief’ when discussing the pleasure-satisfaction theory? Don’t we only realize the faults of false beliefs in the future based on hindsight (which is also unreliable)? Would this then imply that there is no practical way to apply the theory? (Faseeh, TW8)

Differentiate between: something is true/false, vs. I know/have reason to believe that something is true/false. In one sense, we are dealing with beliefs all the time; but truth is about what the world is like anyway (whatever we might think about it). But why do we care to have reasonable/rational beliefs, or beliefs based on evidence, etc.? Because truth matters–whether a virus will kill you isn’t a matter of what we believe, but what is really the case. Truth thus sets a target for our beliefs that isn’t just something about our beliefs.

If we wanted to compare how good two people’s lives are in comparison to each other, would we be able to use preference – satisfaction theory for this comparison? If these two people had very different preferences, how could their lives be compared then?

Interpersonal comparisons are hard–both epistemically and conceptually. This is an active area of debate. Good survey article for those who are interested.

 Does Hedonism imply that we have a moral responsibility to do whatever we can to maximise the total amount of net pleasure in the world?

No; hedonism is a theory about what is intrinsically good; it doesn’t say anything about what we have a moral duty to pursue; Utilitarianism (for instance) does that.

Is it correct to say that if we are a pure DST/PPT/PST, the intrinsic goodness of well-being stems from satisfying our desires, maximising pleasure and satisfying our preferences respectively, so it does not matter what our desires/ preferences/ what we consider pleasure to be, as long as we achieve it because those are the instrumental good? So we should not let our common sense come into play in deciding because it depends on the individual’s perception?

If that’s what your theory says, then sure. That said, nothing stops you from having secondary theories about what people generally desire, or prefer, or what gives people pleasure; there may be some objective issues involved.

If a preference satisfaction theorist says that he prefers to experience pleasure and doesn’t care if he is actually experiencing it in real life or not, then would that make him a pain-pleasure theorist instead?

No; there is still a difference in locating where the intrinsic goodness is–in the preference being satisfied, or in the pleasure. A different thing is underpinning the goodness.

By bringing the lives of animals into the world, we can say that happiness is created and the overall net happiness in the world increases. Hence, would you say that the creation of life could be utilitarian, unless lives experience more misery than happiness?

Yes; that’s part of the Lomasky reply, right?

% what makes Lomasky’s objection a good objection then, even though he doesn’t directly attack Norcross’s arg’s soundness?

Remember that there are two parts. The first part is meant to undermine our confidence in one of the premises in the Puppy Argument–if we aren’t sure what the farm animals are feeling, we are less sure that we should consider “a puppy being tortured” an appropriate analogy for “animals living in a factory farm”. The second part is a separate argument for a conclusion that is opposed to Norcross’ conclusion.

 Does the utilitarian argument against consuming factory farmed meat fail if farm animals do not have conscious experience?

The Hedonic Utilitarian argument will find it hard to get traction. The Utilitarian will need a way to talk sensibly about happiness that doesn’t require the agent or patient having conscious experience. In principle, preferences might do the trick; but a lot of the bite is removed if animals can’t feel pain…

[Same quiz question about the experience machine…]

Keep in mind that I didn’t present the Experience Machine as an argument against Hedonism. For me, it’s really just a device for testing our intuitions to see where we stand.

Is this form of argumentation slightly off? It looks like question begging and does not really follow the premises of the puppy argument very closely:

“Let me tell you about Norcross’ Puppy Argument–If you shouldn’t torture puppies for cocoamone, then you shouldn’t consume factory-farmed meat! Therefore, you shouldn’t consume factory-farmed meat.”

This is a truncated rendition of the Puppy Argument–it is missing one premise (namely, that you shouldn’t torture puppies for cocoamone; with that premise in place, the conclusion follows by Modus Ponens).

“If it costs more to save a Third World life than the life is worth, we have no utilitarian obligation to contribute to efforts to improve the lot of people living in the Third World.” Is this statement true or false?

If it costs more (to the world’s wellbeing) to save a Third World life than the life is worth (to the world’s wellbeing), we have no utilitarian obligation to save that Third World life. The statement given (“we have no utilitarian obligation to contribute to efforts to improve the lot of people living in the Third World”) doesn’t follow though, not without lots of additional bridging statements which are not entailed by Utilitarianism.

Governments implement policies which always affect individuals differently. Many policies lead groups of citizens being marginalised or even exploited. Can political authority ever be true if it is true that a segment of citizens usually get exploited for the benefit of the majority? In such a case, is the truth of consequentialism a sufficient condition to prove that government coercion is morally permissible?

In principle, Political Authority can still exist even if the above is true, right? Keep in mind that the Statist isn’t saying that every government that claims Political Authority actually has Political Authority; she typically has ideas about the additional constraints; for instance, she might say that only governments that act in the general interest of the people have Political Authority. Yes, in principle, some form of Rule Consequentialism might justify Political Authority; but the above still applies.

The implication of the Sam Argument is that governments be subject to the same moral standards as ordinary citizens and this does not mean that government should be abolished. Does this imply that the government should still be conferred authority to impose laws etc. but the authority must be checked by moral standards?

If we take the Sam Argument seriously, then, technically, Political Authority doesn’t exist, period. But even if we don’t accept the Sam Argument, Huemer wants us not to give government a free pass…

What makes an “objectively random occurrence” random? Like how would we know it wasn’t somehow causally deterministic?

Don’t confuse the epistemic issue with the metaphysical issue. Objectively random = as a matter of fact, there is nothing (e.g., the past and the laws of nature) that determines whether it will or will not be. Things that look random may not be, and things that look deterministic may be random–that’s just a limitation of our knowledge.

is it valid to say that one should not be morally responsible for their actions should they be incapacitated to do so? Sociopaths, psychopaths –> neural deformities or even the inability to differentiate right from wrong (for crime and violence). However, it seems like these people can acknowledge right from wrong and act in a socially acceptable way.

Will depend on your theory of the necessary and sufficient conditions for moral responsibility. Also, should introduce degrees of responsibility. Talked a bit about this in one of the podcasts.

Does God decide what is good/right? If so, then would it be wrong to say that there is evil in the world if it is created by God?

It’s a debate even among religious thinkers whether–and if so, exactly what it means that–God “decides” what is good/right; e.g., is it a voluntaristic or willful thing, and if so, does it mean individualized commands, or is God’s will only instantiated in general rules? Or maybe morality is built into the structure of the universe God created, and that’s how God ‘decided’. And so on. In principle, it doesn’t follow from the general idea that God decides what is good/right that there is no evil in the world. Historical monotheistic religions, e.g., Judaism, Christianity, Islam, generally grant that evil exists in the world; rather explicitly too; search for the phrase “evil in the sight of God” in the Christian Bible.

Could you go through the concepts behind epistemic norms again? Are there solid definitions for these norms? Thanks!

An epistemic norm = a rule telling you how you ought to believe, which belief is rational, etc. Example: Principle of Indifference. Logical principles imply such norms too. The scientific method involves a whole cluster of such norms.

In our lecture about Zhuangzi, we resolve the problem by adopting permissivism. However, that only leads us to be able to have equally justified opposing beliefs and we are rationally permitted to have these beliefs. How, then, can we acquire truth about the world? If we cannot acquire truth, then some of the very difficult debates (about what is right and wrong, for example) cannot be resolved and people may do not-so-good things and can be justified to do so.

The Zhuangzi thing is more meta–it’s about whether knowledge is even possible given disagreement. Given Permissivism, we show that the (meta level) skeptical worry can be answered. But to acquire knowledge in a specific domain, we need to go beyond the meta considerations and learn from specific disciplines, etc.

I don’t know what I don’t know, can you tell me what I don’t know? :C

By learning widely; talking to people who don’t share your point of view; reading, watching, etc.

 Is it possible to go any further than epistemological knowledge that X is true? We always assume ‘X is true’, or ‘X is the true moral principle’. Regarding the latter, can we ever really know? Or is morality something that we will forever be unsure about?

We assume that mainly for the purposes of making our quizzes manageable and constrained. Also, it’s important to figure out the logic of the positions in themselves, without consideration of the complications introduced by the additional thing–our knowledge.

I was wondering what does being a physical thing with both mental and physical characteristics mean? How is it possible to be a physical thing but also reject reductive physicalism? Does not having a mental aspect that cannot be exposed mean we are not physical? Thanks!

That’s really a problem for the DATs to explain 😀 There are two routes–emergence (some characteristics are emergent upon the physical characteristics), and panpsychism. One of these days…

What makes a complete theory of mind other than accounting for consciousness and qualia? What do we need to know about qualia?

Check out the Philosophy of Mind modules. That said, consciousness does dominate the discussion a bit because it’s the “hard problem”.

Must a conscious being have will? As an intelligent being accumulates intelligence, will it become conscious? Doesn’t consciousness entail intelligence, although not the other way around, since being conscious also means that they receive new information such as what is perceived, which could then lead to a choice made by the conscious being?

Shouldn’t assume that consciousness implies either will/desire, or intelligence. In principle, a creature may have one without the others. Or, in our case, all three. And they can even work together too–as in your example. But the point still remains that they aren’t the same thing and don’t have to co-exist.

Why is it for mind-body dualism, even though they are separated as 2 different entities, there can still be a causal relationship between both of them? Why can the mind supervene on the physical?

I’m still a little confused about the 2 components of Mind-Body Dependence. What would be an example of a thing A that supervenes on a thing B, but A is not explained by B? Thank you!

Supervenience is just a co-variation relationship. That co-variation could be explained by the relationship of both to a third thing. The time on your smartphone is correlated with the time on my computer–but neither explains the other. One possible theory (there were historical Dualist thinkers who were for this) is that God is the one behind the scenes making sure that mind and body line up (“Occasionalism”).

Why do we have to assume that mental characteristics supervene on physical characteristics for substrate independence to be true?

That’s Bostrom actually. Personally, I’m not overly convinced myself. But it does make sense—remember that we are talking about what’s feasible for simulators to create sims: they will need a reliable method, which implies a degree of causal control, which will imply noticing a supervenience. Again, think: making babies…

Hi prof, haha no question for you cause I am just lost in general, but in the topic of consciousness, mind-body dualism suggests that the mind/soul body are separate entities, I found this theory interesting as it reminds me of the chinese saying “灵魂出窍”(meaning the soul leaving the body), if only my 灵魂 can 出窍 during the finals that’d be great

But try to keep body and soul together, ok? The “out of body” experience thing isn’t specifically Chinese, of course, though there are Chinese versions of the thing.

Given the implication of being in a simulation, is it feasible to apply something like pascal’s wager in determining if we should live our life as if we are not in a simulation? (i.e. to live our best life despite implications?)

Sure, why not? But couldn’t the wager also be run with “God” replaced by “our simulators”? Bostrom said something about this.

Hi prof, can you explain the partition problem again? I don’t really understand it

Not sure if I want to add more to what’s already in the blog post, though. It’s not meant to be a big part of the module.

If principle of indifference is true, can’t our partitions for everything just be 1) is true, or 2) is false and the probability of everything will just be 0.5 and we don’t need to think so hard about life

Suppose you have 1 red marble and 99 blue marbles in the bag, all the same size, weight, texture, etc., and you take one out without being able to see what color it is. Do you really believe that “it’s 50/50 red” is just as likely as “it’s 1/100 red”? The problem is that some partitions do seem better than others.

What exactly is an ancestor simulation? Is it: (a) All historic figures have been simulated (and we are also in a simulation) or (b) We can create a simulation that allows flesh-and-blood human beings to experience how it is like to be someone of the past (e.g. shakespeare) using an ‘ancestor simulation’?

Primarily (a). But conceivably, one purpose of doing (a) is so that we can do (b).

If we are living in an ancestral simulation, does this mean that our predecessors are our Gods? Do we still have free will in this instance?

Our simulators would be, as it were, our gods. Depends on the programming (whether the simulators programmed us deterministically or non-deterministically).

Is it possible that people who die are people who get close to finding out if we are truly sims, and the sim overlords just want to end it there to prevent the overthrowing of the simulation?

But how did they even “get close to finding out”? As far as I am aware, it’s not possible for us to find out without the simulators revealing themselves.

Could you explain further why Bostrom thinks that posthuman civilizations will include people that are interested in running ancestor simulations? what benefits will they get by running this simulation or why would they want to?

Though the probability may be higher, rather than be simulated as an ‘ancestor simulation’ or as a scientific/theoretical experiment, is it possible that we are simulated in a recreational simulation instead? I.e we’re models in a game. So rather than a species wide project, we are just products in a game. How would we know if we’re just NPCs in a simulation?

Some of us are already interested though we don’t have the means. If our descendants are like us, they would want to as well; I mention a few purposes in the lecture. 

What are some of your personal views on the idea that we are living in a simulation?

I don’t think we are living in a simulation, but it’s definitely a possibility. “Possible” isn’t the same as “likely”.

Consider the following theories: Hedonism, Utilitarianism, Dualism, Anarchism, Atheism. What is the maximum number of these theories that you can consistently accept at the same time?

Five.

* * * * *

Questions that came later by email:

One necessary condition for knowledge is for the “belief to be true”. But how can we know if the “belief is really true”? And how can we know if our belief that the belief is true is true?… Is this then a problem of infinite regress & knowledge is logically impossible?

Your confidence that a belief is true just is the level of your justification for the belief–there is no further issue there. Note also that it is entirely possible for one to know something without knowing that one knows it.

Lomasky objects by saying Norcross is anthropomorphizing, but isn’t Lomasky himself anthropomorphizing when he says that “FF animals experience more pleasure than pain in their lives”?  I take “anthropomorphizing” here merely as “humans are not animals and do not know what it is like to be one”. Does this suggest that he’s kind of being hypocritical in his objection (no disrespect to your friend tho) & possibly has a made-up mind? It also sounds a bit like Huizi-Zhuangzi’s debate on knowing if a fish was happy.

Look at Slide #34 carefully again. Don’t forget his observations about how animals behave in the wild where they are not constrained by humans–circumstantial evidence that crowding (for instance) may not be anything the animals especially dislike. That’s why he says “it’s very likely that on average, factory-farmed animals experience more pleasure than pain in their lives (unlike Fred’s puppies).”–so unless there’s fresh evidence, his suggestion is at least as good as what Norcross is proposing. Think of him as saying to the third party–there is at least as much reason for you to agree with me as you have to agree with him, maybe even a little more. See also my response to #5 above.

I’m confused with slide 25 of LPOE. How does talking about good or evil in terms of “things” help the theist more than talking about them in terms of “ideas”? I thought if you use “things” to talk about them, then it makes more sense that good things can exist w/o bad things existing, which means good things are not necessary counterparts to each other. And this doesn’t help the theist.  But if you talk in terms of “ideas”, then you are proving that they are logical opposites & necessary counterparts to each other? Which helps the theist more?

The theist needs the “evil as necessary counterpart” principle to be about things. Remember that the LPOE is about how “God is omnipotent” and “God is wholly good” supposedly conflict with “Evil exists”—as in evil things exist, not “the idea of evil exists”. But that principle is plausible only if it is about ideas, so it doesn’t help the theist respond to the LPOE.

X. Therefore, either X or Y.

May I clarify why it’s valid? As long as the conclusion is true, even if it’s not logically necessary, and given the premise, the argument is valid, is it? I’m thinking that the conclusion is not logically necessary given the premise, but is true because the world has cooperated or sth.

It’s valid because you can’t both have the premise true and the conclusion false without contradiction. (That’s the technical definition of validity; see also here.) Remember that validity is about how the premises relate to the conclusion. It’s not about whether the conclusion is true or logically necessary. We aren’t really covering validity in this module–you need the formal logic module for that.
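For those who like to double-check such things mechanically, here is a tiny optional brute-force check (my own illustration, not module material). It looks for an assignment of truth values where the premise X is true and the conclusion “X or Y” is false, and finds none–which is just the definition of validity at work:

    # Check the form "X. Therefore, X or Y." for validity: a form is valid iff
    # there is no assignment making the premise true and the conclusion false.
    from itertools import product

    counterexamples = [(X, Y) for X, Y in product([True, False], repeat=2)
                       if X and not (X or Y)]  # premise true, conclusion false
    print(counterexamples)  # [] -- no counterexample, so the form is valid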

If Simulated Consciousness is infeasible, is Bostrom’s argument unsound? Actually, why do we need the 4-way argument, if behind Bostrom’s propositions he assumes that SC is feasible. (Ok but he does right?) EG: Assuming that Classical Theistic God is not omnipotent, you don’t have a LPOE problem anymore. IF the antecedent is false, it doesn’t make the whole statement false? –> his 3 propositions still can’t be false all at once.

Bostrom says he assumes that Simulated Consciousness is feasible. In fact, he generally assumes that in his exposition. But technically, his argument doesn’t depend upon Simulated Consciousness being feasible. If Simulated Consciousness is infeasible, then his (1) is true–the proportion of civs that become posthuman before they go extinct tends towards zero. In fact, it is zero. But note that if (1) is true, then it really does follow that of the three propositions (1), (2) and (3), at least one is true. (See also the previous question.) I introduced the 4-way not to contest this but to point out that (1) can be further split into two possibilities depending on whether Simulated Consciousness is feasible.

Is the conclusion being true necessary for an argument to be sound?

Yes. If an argument is sound, then, it has true premises and is valid. But what happens when an argument has true premises and is valid? It has a true conclusion. So, if an argument is sound, it has a true conclusion. So, if an argument doesn’t have a true conclusion, it isn’t sound (by Contraposition; think: Modus Tollens). So, having a true conclusion is a necessary condition for an argument to be sound. Also previously mentioned here.
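In compact form (my shorthand for the chain of reasoning above, with A for the argument):

\[
\begin{aligned}
&\text{Sound}(A) \;\rightarrow\; \big(\text{premises of } A \text{ all true} \wedge \text{Valid}(A)\big)\\
&\big(\text{premises of } A \text{ all true} \wedge \text{Valid}(A)\big) \;\rightarrow\; \text{conclusion of } A \text{ true}\\
&\text{So: } \text{Sound}(A) \;\rightarrow\; \text{conclusion of } A \text{ true}\\
&\text{By Contraposition: } \text{conclusion of } A \text{ not true} \;\rightarrow\; \lnot\text{Sound}(A).
\end{aligned}
\]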

If the conclusion is of a “X is justified” nature (instead of a “X is true/F” nature), then if the conclusion is true, is X then true? If yes then, if an argument is sound, it makes X true & justified at the same time. Another similar qn: do objections against the unsoundness of argument prove that the conclusion is unjustified or untrue?

If the conclusion is “X is justified”, then, it’s true that X is justified. (Which is not the same as “X is true”.)

If an argument is sound, then it has a true conclusion, as mentioned in the earlier blog post. The reverse doesn’t follow–you can’t derive “if the conclusion is true, the argument is sound”, or “if the argument is unsound, the conclusion is false”. So you can’t deduce whether the conclusion is true or false just from the information that the argument is unsound.

Can you deduce if the conclusion is unjustified? It will depend on what you mean by “unjustified”. Is it (a) X is unjustified = reason is positively against X, or (b) X is unjustified = there isn’t a positive reason for X? If the argument for X is unsound, then, assuming that the argument was meant to be the reason for X, the reason has now been defeated. So–all else being equal–X now “lacks the justification we previously thought it enjoyed in the form of the argument” (the (b) sense). But this isn’t the same as saying that X is unjustified, as in reason is against X (the (a) sense). After all, we can’t deduce from the information whether there is or isn’t some other reason that supports X, right?

For the Principle of Indifference (the example with 1 red marble and 99 blue marbles):

  • Option 1: 1/100 chance it’s red
  • Option 2: 1/2 chance it’s red

I’m still really bummed out by how option 2 can even be a proper application. I thought the condition that “there’s no evidence about relative likelihoods” is not met? Bc you know that there are 99 blue 1 red. Yes you know you’ll either pick red or blue but you also do know how many there are for each colour. Any mathematician who has learnt the topic of probability would think it ridiculous to assign 1/2 lol.

But is it the case that option 1 is not universally accepted yet and many philosophers still believe in option 2?

As I asked above, “Do you really believe that “it’s 50/50 red” is just as likely as “it’s 1/100 red”? The problem is that some partitions do seem better than others.”

I don’t know of any philosopher who accepts Option 2. The philosophers who put Option 2 forward aren’t telling us to accept it. They are just pointing out that if it’s a proper application of the principle, then there’s clearly something wrong with the principle as formulated. The choice then is whether there’s a way to reinforce the principle so that Option 2 is ruled out, or, if not, whether we should drop the principle altogether, as explained in the earlier post.

By the way, those mathematicians are not disputing the point here. They agree that if there are n mutually exclusive and jointly exhaustive possibilities and we don’t know whether any one of the n is more probable than another, then the probability of each is 1/n. The dispute isn’t over that—the dispute is over how the “n” for a given probability space is to be determined, and whether there are better and worse ways to determine that “n”.
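If it helps to see why the finer, per-marble partition is the one that tracks reality, here is a small optional simulation (my own illustration, not from the lectures). Drawing repeatedly from a bag of 1 red and 99 blue marbles, the red marble comes up roughly 1% of the time, not 50%:

    # Repeated draws from a bag of 1 red + 99 blue marbles: the long-run frequency
    # of red is about 0.01, matching the per-marble (1/100) partition rather than
    # the coarse "red or not red" (1/2) partition.
    import random

    bag = ["red"] + ["blue"] * 99
    trials = 100_000
    reds = sum(1 for _ in range(trials) if random.choice(bag) == "red")
    print(reds / trials)  # roughly 0.01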