Here goes. I have also published several (longish) posts relating to the topic–they were created in response to recurring student queries from previous batches. So do browse them if you have a chance. These include–a taxonomy of the main positions in the so-called “free will and determinism debate”, a post distinguishing causal determinism from two other ideas (foreknowledge, and fatalism) that are often confused with it, and a recap of the Strawson-Hurley debate. I also added a few questions (marked with #) that came from previous years.

Introduction to topic–

Is free will really free?

can believing in free will make us have free will?

If it exists, then it is free. This is like asking whether squares are really right-angled. Or maybe the student is asking whether what we consider “free will” really exists. Ah–then that will first depend on what we mean by “free will”. On the explication I used, free will = having the ability to do (i.e., will) other than what one did (i.e., willed) (see Slide #8). But as you soon discovered, the Compatibilists and their opponents don’t interpret that idea in the same way–the first group (those who reject the first part of Premise 2 in the Standard Argument) believes that such an ability is compatible with determinism; the other (those who accept that same premise), that it isn’t. You just did something. Can (just) believing that you have the ability to do other than what you did give you that ability? (No.)

Why do only humans have free will? 🙁 Is my dog forced to love me?

is being a moral agent / patient relevant to free will? how are they linked?

It’s actually controversial–some people do (conceivably) believe that non-human animals have free will (the ability to do otherwise). Note, however, that we should be careful about something else here–whether you are “forced” whenever you lack free will is not itself a settled matter.

Think of something simple you did just now–apparently without being forced. For instance, you picked a certain shirt to wear. Something where you didn’t feel “forced”. Now contrast it with something where you really did feel “forced”. Your friends pressuring you to join them in some hijinks when you would rather read philosophy, or whatever. If Causal Determinism is true, neither event could have been otherwise given the past and the laws of nature–you don’t have free will (assuming Premise 2 of the Standard Argument is true). But the difference between the two will still be there. Long and short–even if, given Determinism, we don’t have free will, it doesn’t follow that all our actions and choices are forced. Some may be, and some may not be.

Back to the dog–don’t forget that we are only talking about some of the necessary conditions for moral responsibility here. Presumably, your dog is at best a moral patient, not a moral agent. Being capable of being morally responsible is, presumably, a necessary condition for moral agency.

Is the naive theory implying having free will, but it’s naive because it doesn’t explicitly mention free will?

The Naïve Theory is on Slide #7–note that it does not mention free will at all. Whether the “choice” mentioned in it involves free will is open to interpretation.

So free will and rationality opposes determinism?

If you agree with Premise 2 of the Standard Argument, you can’t both have free will and have your actions and choices be deterministic as well. Rationality isn’t really part of this discussion–it’s a separate constraint on moral agency.

I don’t have free will because I raised my hand when you told me to, even though I could have just not done it XD

If you really could have not done it, then you have fulfilled the necessary condition for free will. An exercise of free will doesn’t have to be completely unconnected with stimuli, e.g., being asked by others to do things and your responding. And those stimuli could be internal as well, e.g., wanting to eat an ice cream. In other words, typical things we actually do. If my choice is free only if it is completely unconnected with my desires, beliefs, values, habits, and my reception of external stimuli, then why is that choice even considered “mine” at all? It might as well be a completely random happening in the universe! (Hence the motivation for Premise 3 of the Standard Argument.)

‘could you have chosen something other than what you chose’ is not the same as ‘was any other action acceptable’, right?

Absolutely. We are talking about whether you have the ability to do something (other than what you did), not whether the thing is acceptable or permissible, good, bad, etc.

What are all the conditions for this free will doctrine? So does the free will doctrine define what free will is?

The whole of what I called the Free Will Doctrine (or Principle of Alternate Possibilities) is stated on Slide #10. It’s not a definition of free will, by the way. It states that free will (the ability to do otherwise) is a necessary condition for moral responsibility.

# In the lecture, you mentioned the contrast between pragmatic reasons to praise or censure someone vs. reasons that have to do with someone deserving that treatment. Is there any way these 2 ideas can overlap? Or are they mutually exclusive?

Sure, under the right conditions, the same person can both deserve the treatment and give us good pragmatic reason to treat him in the relevant way. But this isn’t an “overlap” in the concepts–rather, it’s just that sometimes, both concepts apply to the same thing. By the same token, they aren’t mutually exclusive concepts, since they can both apply to the same thing.

Determinism and Standard Argument–

So everything happens for a reason, and if you dont believe in that, then you are not morally responsible?

Is the future then already determined?

I believe the above is regarding determinism? Yes, if Causal Determinism is true of the universe, then everything that happens is fixed by the past and the laws of nature. Assuming that both the past and the laws of nature are themselves fixed (unchangeable), the course of the universe–into the future–is thoroughly fixed as well. Incidentally, don’t jump from all this to saying that everything that happens happens “for a reason”–that’s actually not the same idea, even if related. (Not every explanation or reason is a causal one; we used to have to talk about this for a topic that’s now decommissioned.)
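One schematic way to render “fixed by the past and the laws of nature” (my own rendering, for illustration–not from the slides):

```latex
% S_t = the complete state of the universe at time t; L = the laws of nature.
% Causal Determinism: the state at any one time, together with the laws,
% entails the state at every later time.
(L \;\land\; S_t) \;\Rightarrow\; S_{t'} \qquad \text{for all } t' > t
```

So if the past state and the laws are both fixed, there is exactly one physically possible future.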

If what happens next is determined by the slice of what has already happened, and we can divide these slices of time into infinitely small slices, does this basically mean that we are finding dy/dx?

Depends on what your dy/dx is meant to be the analogy of. Taken literally, that’s the rate of change. Whether the universe is deterministic or not, there will be a “rate of change”, right?

Should there be a moral truth, a person w free will would rationally use biological and past experiences to make his decisions. When this person inevitably make a decision, wouldn’t choices tend to always be causally determined then? Contradiction?

Where’s the contradiction? Is the scenario here that the universe is deterministic? And does this imply that people don’t have free will? If your answers are “Yes” and “Yes”, then, even if moral truths exist (e.g., Singer’s Principle, let’s say) and people can behave rationally, you shouldn’t say that they are exercising free will when rationally using past experience to make decisions–since all that rational decision making is causally determined. If your answers are “Yes” and “No” (siding with the Compatibilists–those who reject the first part of Premise 2 of the Standard Argument), then you can say that the person is exercising free will, etc.

but what if the very act of knowing the future causes a modification in the said future? 🤔

If, in knowing the future–now–you fix the future–now, before the future arrives–you have the kernel of the (apparent) problem posed by foreknowledge.

Is modern semi-compatibilism saying that our moral responsibility comes from choosing an action/inaction?

Depends on what you mean by that “comes from”–the typical semi-compatibilist accepts the naïve theory… (but has more to say, of course). The more important idea is that for her, free will isn’t a necessary condition for moral responsibility (rejecting the Principle of Alternate Possibilities, i.e., Free Will Doctrine).

# Please explain why if P and Q lead to the same conclusion, R, is a dilemma?? Is the dilemma we don’t know if R is caused by P or Q? If not, where is the dilemma?

The “dilemma” in the name comes about from Premise 1 (“Either P or Q”)–the argument works because there are two (“di-”) propositions (the “lemma”; P, Q) that ultimately lead to the same conclusion. It also trades on the sense of the term which means something like “an undesirable choice between two equally bad options”.
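Schematically, the argument has the standard “constructive dilemma” form (the labels here are mine, not the slides’):

```latex
\begin{array}{ll}
\text{Premise 1:} & P \lor Q \\
\text{Premise 2:} & P \rightarrow R \\
\text{Premise 3:} & Q \rightarrow R \\
\hline
\text{Conclusion:} & R
\end{array}
```

Whichever branch of Premise 1 turns out true, you end up at R–that’s why no separate premise telling you which of P or Q holds is needed.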

# How would I resolve the dilemma that I come to have the moral responsibility only by performing an action but I perform an action only if I have the moral responsibility?

The first branch is straightforward–I am morally responsible for doing X only if I did X (among other things). But I’m not getting the second branch. What does it mean that “I perform an action only if I have the moral responsibility”? Surely it can’t mean–I did X only if I am morally responsible for doing X, since that’s patently false–sometimes, “I did X” is true even though “I am morally responsible for doing X” is false. By the way, no dilemma is generated even if, somehow, you managed to say both P only if Q, and Q only if P–if you look up the “Short Lesson”, you will see that the two together entail P if and only if Q, that’s all.
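To spell out that last point in symbols (routine propositional logic, not a dilemma):

```latex
% "P only if Q" is P \rightarrow Q; "Q only if P" is Q \rightarrow P.
(P \rightarrow Q) \,\land\, (Q \rightarrow P)
\;\;\text{jointly entail}\;\;
P \leftrightarrow Q
```

That is, the two conditionals together just give you a biconditional–no conflict between them.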

Strawson’s Basic Argument and the “Hurley Response”

Can we say that Franz is morally responsible? Maybe if he Nazi-proofed his house to prevent the invasion of Gertrude and made sure of any weird feelings inside his head, he wouldn’t have been implanted.

If he was not implanted, then we are dealing with a completely different scenario than Implant. So why is this relevant?

Does the basic argument work if our actions are indeterministic?

If our actions are indeterministic, is there anything behind the actions?

Very good. Technically, no. If our actions (or choices) are totally indeterministic–if there’s nothing in the way we are behind the actions (or choices)–then the Basic Argument doesn’t quite work. However, this is a very high price to pay to escape the clutches of the Basic Argument–you are now confronted with Premise 3 of the Standard Argument in full force. It’s going to be very hard to tell your actions (or choices) apart from objectively random happenings. Why are they even considered “our” actions at all?

I think his premise 2 is problematic – couldn’t we argue that we are morally responsible for the actions we take now, but not the way we were before.

The thought is precisely that you can’t be responsible for your action now if you weren’t also responsible for the way you were behind it. And at least for the first (few) steps, there’s actually a lot of plausibility to it–just think: can you be morally responsible for your action if there isn’t also a choice or decision behind it, such that you did the action because of that choice or decision? Presumably not. But then how are you supposed to be morally responsible for the action if you weren’t also morally responsible for the choice or decision that led to it? –The Implant scenario presents a case where, intuitively, we want to say that the Implanted Franz isn’t morally responsible. If so, what’s our counter-proposal to Strawson’s Principle?

Is it safe to say that the Standard and Basic argument both states that we cannot be morally responsible for our actions?

Technically, only in the Basic Argument is the conclusion that we “cannot” be morally responsible–in the sense that moral responsibility is a conceptual impossibility.

can asserting that “moral responsibility is an impossibility” be an action that has right or wrong ie. there’s a moral truth to it?

Separate issues. In a fuller overall theory, they will probably end up affecting each other. But for now, we are considering moral responsibility on its own–holding fixed the idea that there are some moral truths.

hi prof, can you explain 1b on hypothetical choices again?

how can we make a hypothetical choice that was way before our existence?

can an hypothetical choice become an actual choice without any infinite regress?

Actual choice = you actually, really, made the choice. Hypothetical choice = you would have made the choice, if a certain condition had held. (Compare actual consent vs. hypothetical consent in W06 Slides #34-35.) A hypothetical choice can be true of you even if you weren’t–even couldn’t have been–there to actually make the choice. (Think of the grandma example in W06 Slide #35.) No, hypothetical choices are hypothetical choices–they don’t “become” actual choices. But it can be true both that you actually chose to do X, and that you would have chosen to do X if you could. In this case, there’s both an actual and a hypothetical choice.