Was somewhat tied up over the weekend so was unable to get to this. I’ve answered up till just before the “common objections” part.

The first ones below are mainly on the Demandingness Objection.

Why do you feel that the responses to ‘personal interest’ and ‘integrity of character’ are different?

Only because, conceivably, they don’t have to go together–as long as it is at least conceivable that, sometimes, upholding integrity of character requires paying a personal cost, and conversely, pursuing what’s in one’s interest implies setting aside integrity of character. But of course it’s also possible that they go together, perhaps even perfectly. Since I’ve not settled the issue, it’s better to keep them distinct for now.

Would Robin Hood be considered a utilitarian?

Only if what he does brings about the best outcomes for the world–the greatest happiness of the greatest number.

Singer mentioned that we turn a blind eye to consequences. However, can utilitarians also be motivated by guilt more than consequences? Taking into account the drowning child, will a utilitarian consider the happiness of the child over their own?

By all means enter all that into your utility calculation–the unhappiness created by your not going in to save the drowning child should include such factors as your feeling of guilt. Likewise, the happiness created by your going in to save the drowning child should include the joy you feel in doing good. Everything that relates to the world’s happiness–and you are part of the world–needs to be included.
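To make this concrete, here is a toy calculation with completely made-up numbers–neither the figures nor the idea that hedonic values can be so neatly quantified is Singer’s; it only illustrates how guilt and joy would enter the sum:

```python
# Toy utility calculation for the drowning-child case.
# All numbers are invented for illustration only; real hedonic
# values are, of course, not so easily quantified.

# Option A: wade in and save the child
save = {
    "child's life and future happiness": +100.0,
    "your joy in doing good": +5.0,
    "ruined clothes and shoes": -2.0,
}

# Option B: walk past
ignore = {
    "child drowns": -100.0,
    "your lasting guilt": -10.0,
    "clothes and shoes preserved": +2.0,
}

net_save = sum(save.values())      # 103.0
net_ignore = sum(ignore.values())  # -108.0

# Utilitarianism: do whatever brings about the greater net happiness.
best = "save" if net_save > net_ignore else "ignore"
print(best)  # save
```

Notice that the guilt term counts against Option B just as the joy term counts for Option A–both are part of the world’s happiness.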

is Singer’s response question-begging though?

The summary response to the demandingness objection? It’s not straightforwardly question-begging, since it doesn’t assume the falsity of the opponent’s position. It’s more like a sort of “outflanking maneuver”. Rather than arguing that the opponent’s position is false, he argues that, actually, the opponent doesn’t really believe in her own position. If she were to look deep in her own heart, she would realize that she has really been on his side all along. Yes, sort of like that. If the point of the argument is to rationally persuade your opponent to take up a certain position, this is in the category of sick kungfu rather than question-begging.

could you elaborate more on how a preference utilitarian would consider preferences and interests in making a decision?

Not for this class… If you are really curious, you can check out the interview where Singer explains why he abandoned Preference Utilitarianism for Hedonic Utilitarianism. Just note that it’s strictly extra stuff for the purposes of the class.

Is there an example of a moral agent that is not a moral patient?

I’m adding this one to the Q/A for W04 (scroll to the end).

Moving on to the first consideration of the Drowning Child argument–

Wouldn’t not saving the child be a utilitarian decision, and praiseworthy, because Darwinism calls for weeding out the weak? If a child can end up in that scenario and drown in shallow water, we would be doing future generations a favour.

Are you assuming that this sort of “Darwinism” is compatible with Utilitarianism? Always go back to the definition. If, by ignoring the child, you bring about the greatest happiness of the greatest number, then the morally correct thing to do according to Utilitarianism is to ignore the child. You need to show us the math for the calculation though. More importantly, note that at this point in the dialectic, Singer isn’t straightforwardly invoking Utilitarianism. He’s just asking you–do you sincerely believe that you don’t have an obligation to save the child? (As you recall, most of the class seem to agree with Singer on the truth of that premise though.)

Would his argument be stronger if he had another argument for the people starving? His current argument might lean too much on extrinsic motivation for climbing the social ladder, where I feel such motivation should be intrinsic in individuals.

I’m not sure what you mean by this? He’s counting on you to judge that you do have an obligation to save the child. We can flesh out the scenario such that no one else is looking, and if necessary, add on enough details so that you can’t get serious social capital by having people know and praise you for saving the child. Under those conditions, do you still think that you have an obligation to save the child? If so, he has a strong starting point. All he needs to do next is to show that the situation of the starving child in the poor country is morally analogous.

What’s the analogy with the Puppy Argument?

I have a blog post planned for after W06–because the argument for that topic has a similar structure. Look out for it. (And the ancient Mohists will appear too–they discussed this type of argumentative strategy more than two thousand years ago…)

Is it supererogatory even if you donate below marginal utility? So the people you help, and you yourself, are also happy?

How could/would the supererogatory exist in Utilitarianism? Or is it not possible for it to exist at all?

If someone gives not just to the point of marginal utility, but even more than that, will Singer consider that person to be doing something supererogatory?

So is the central conflict between the Drowning Child argument and the Demandingness Objection whether some actions can be supererogatory? (i.e., if we can find a space for the supererogatory, we are not obligated to give money to poor countries)

There’s a local issue and a global issue. The local issue is–Singer wants to argue that we do have a duty to give more to help the suffering in poor countries, even though we currently think that this is supererogatory. There’s a further–global–issue of whether Utilitarianism can accommodate supererogation at all. We shouldn’t confuse them, since Singer isn’t engaging with the global issue.

Exactly how Utilitarianism can accommodate the supererogatory–and whether it should even bother–is a whole discussion of its own. The short version is that the bare-bones version of Utilitarianism we have been considering has no space for supererogation. You will need to add more to the theory, or modify something, to accommodate acts that are not within our duty and yet praiseworthy. One student suggested the following idea in the chat–imagine that you have the option between two courses of action which are equivalent in terms of the utility they produce for the world, but one requires a greater sacrifice from you personally while the other doesn’t. Plausibly, choosing the costlier option is not morally required but still “morally good”. This is fair–but do note that you need some additional machinery to explain that extra idea, since it’s not captured by what’s in our bare-bones definition of Utilitarianism.

Continuing…

Are Causal Disconnect and Causal Impotence similar or different?

Is Causal Disconnect caused by a knowledge gap? What’s the difference between Objections 1 and 2?

Would it be right to think of Causal Impotence as–even if I do something, the effects that come out of it are not SIGNIFICANT enough to solve the problem and so cannot move me to do something? While Causal Disconnect would be–even if I don’t do something, nothing will happen?

Different. The names are merely there for convenience; being able to tell the difference between the two lines of thinking is more important.

  • Objection from Causal Disconnect = If I don’t jump into the pool, a specific child will die as a result; but no specific child will die as a result of my failing to donate (even though, conceivably, there will be dying children).
  • Objection from Causal Impotence = If I wade in, the drowning child will be saved; but even if I donate everything I have, I will make almost no difference to the situation with the starving children in the famine.

The first one questions whether my inaction is causally connected with actual suffering. The second one questions whether my action can make a difference. Neither is directly about knowledge though…

Is the Objection from Causal Disconnect a subset of the objection about Knowledge/Distance?

Can I say that we are not blameworthy if we cannot foresee a direct consequence? I.e., giving money has no direct consequence but saving a drowning child does.

Rather, the reply to Causal Disconnect is to point out that–it’s not true that no particular kid will die because you didn’t donate. At best, you can say that you don’t know which particular kid will die. But a moment’s reflection will tell you that there is a direct consequence–at some point in that queue handing out food, the supplies run out, and the next person starves… The fact that you don’t know who that person is doesn’t mean he doesn’t exist.

how would you (or Singer) define marginal utility?

It’s in the reading, p. 241. Also on Slide #26.

We can always donate two care packs instead of one. But because we choose to donate only one care pack, a second person would die. So regardless of our actions people will die–where do we draw the line? Is it at the point of marginal utility?

New objection: what if we see a drowning child every few seconds for the rest of our lives? Are we allowed to only save some of them?

Singer will say–yes, to the point of marginal utility. Or think of my drowning kids in the pool analogy. I can always go in a second time to save a second child. But because I chose to go in only once, etc., etc…

Isn’t this analogy (having multiple kids drowning) incomplete? Isn’t it more accurate to state that you only have the strength to save maybe two kids? With this caveat, are you now blameworthy for not saving the fourth kid onwards?

Ah, but if you reach that far–because that’s all the strength you have–then you are doing what you can to the point of marginal utility. Singer will say you have done well.

For the canal analogy, if there are many kids drowning, the logical thing to do would be to call for help. Hence, can this relate to how we can encourage everyone else to donate rather than doing everything alone? Then you can make a difference.

But think back to the scenario–it’s slightly out of the way. You could have saved the child at the cost of ruining your clothes and shoes. The child might die before further help arrives. Also, more generally–do both, Singer will surely say!

For the drowning child, I can be sure that my action makes a difference, but for the starving child, how do I know if my action actually makes an impact? (e.g., donations may not go directly to the recipients)

Money is not always the solution. How do you think Singer would reply to this? Would he feel that his argument is too narrow, and probably that is why it is not widely accepted?

That’s why you donate to reputable relief organizations, follow the work of https://www.effectivealtruism.org/ etc. No, money isn’t the only thing that is needed. But seriously, the starving kids need food, which has to be purchased with money. We aren’t talking about things for which money is not even part of the solution at all.

Can we blame ourselves for not having knowledge and thus not giving our resources, especially if some charities are not transparent about their efforts?

But now you know…

What does being collectively blameworthy yet not individually blameworthy mean?

Think back to the analogy (Slide #22 especially). Are the three merely “collectively blameworthy” for not doing their utmost to save as many as they can? Also, considering that the population of the richer part of the world is a much smaller proportion compared to the poorest part of the world, the collective responsibility can still imply a big share for each of us…

Is what you do morally supererogatory if you go in and somehow manage to save 10 children as one person? Through sheer determination lol

If that goes beyond marginal utility, you are actually doing something that brings about a net loss of happiness to the world… so, at least the Utilitarians won’t be praising you.

Is this a valid alternative solution: teach all children how to swim? Then we won’t be frequently confronted with having to save drowning kids. It’s just like the scenario where countries starve because of war and population

That’s not an alternative–it’s a further call to action. When someone is drowning right before you and you can save them, save them first, then worry about teaching them to swim.

Hi Prof, may I ask why we are discussing Peter Singer’s argument through the lens of its being a utilitarian argument? A deontologist and a consequentialist can agree on both premises, right?

It’s a bit of both though. The argument can be taken in a neutral way in the abstract. But as I also mentioned in the lecture, Singer does want to say that our support for Premise 1 indicates our unspoken support for Utilitarianism. Also, as far as possible, the “common objections” are not replied to with any reference to Utilitarianism in the Webinar itself. The ‘Friedman reply’, on the other hand, leverages the Utilitarianism in the background of Singer’s argument. Here’s another way to think about it. In principle, one can accept a version of the Drowning Child Argument that is–like the Puppy Argument–not dependent upon Utilitarianism at all. But like the way Norcross talks about his Puppy Argument, Singer does draw a closer connection between his version of the Drowning Child Argument and Utilitarianism (see Slides #24-26). Singer’s Strong Principle is basically a Utilitarian principle.

What would Singer reply if someone believes that saving the child is always supererogatory because they aren’t a utilitarian and simply believe that they aren’t bound by any pre-existing obligation or responsibility towards the child?

He does see himself as targeting people who can at least agree that we do have a duty to save the drowning child… (Think back to the Puppy Argument.)

What exactly can constitute as moral significance? Is there an assumption that clothes and cars are superficial and cannot be morally significant?

Singer doesn’t need such a strong assumption. He’s only asking you to think to yourself–my clothes vs. a human life. My car vs. a human life, etc., so, which one is more significant, morally speaking?

Should we sacrifice one of our hands to save another person’s life?

I think that on the Weak Principle, it won’t be hard to make a case that doing so will be to sacrifice something morally significant, and so, not a duty.

Was that a One Piece reference?

That’s unlikely, since that’s not one of the anime shows that my girls and I watch…

The weakly inaccessible cardinal is not used in such a manner.

No. The point here is just this. If you say that a human life has infinite moral worth, then, you can’t also say that two human lives are worth more than one human life (twice as much)–since infinity + infinity is just infinity. This is a serious problem for most versions of Utilitarianism–because given the above, you can’t even say that the five lives on the track are worth more than the one life in the trolley scenario. In principle, I suppose one can try to make use of some fancy mathematics to overcome the problem, appealing to different orders of infinity. But until the Utilitarian figures out a compelling way to use that kind of fancy math, they have a problem on their hands if they want to say that a human life has infinite worth.
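You can see the arithmetic problem even with ordinary floating-point infinity–this is only an illustration of the arithmetic, not a claim about how utilities are actually represented:

```python
# If one life has infinite worth, standard infinite arithmetic
# cannot distinguish one life from five.
one_life = float("inf")
five_lives = 5 * one_life

print(one_life == five_lives)           # True: 5 x infinity is still infinity
print(one_life + one_life == one_life)  # True: infinity + infinity = infinity
print(five_lives > one_life)            # False: no "more than" among these infinities
```

So the Utilitarian who assigns infinite worth to each life cannot rank the five-lives outcome above the one-life outcome.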

Isn’t it a very capitalist perspective to say that income generates happiness for the world? What if the child in the poor country won’t generate much income but can still bring happiness through being a nurse/counsellor, jobs that don’t pay high?

Isn’t it problematic to base calculations of happiness on income? What if a person is a billionaire but their earnings derive from exploitation and abuse of workers? Wouldn’t their income derive not from making people happy but rather the opposite?

Are there any possible metrics, other than expected lifetime income, for measuring overall contribution to happiness in the world? It seems very subjective, considering socio-economic status is only one of the many factors that could contribute.

Ah–watch the thing carefully. See Slide #33. The point behind the Friedman Reply isn’t that income “generates” happiness, but that average income is an indication of net happiness contribution. (The marks on a ruler indicate height; they don’t generate your height!) Even more modestly, the point is that if you have two individuals from two populations with a large difference in average lifetime income, then on average, they have different net contributions to world happiness, indicated by the different average lifetime incomes–because, as it were, the world ‘pays’ them differently because they make a different contribution to world happiness.

Let’s say there are 3 children (of equal value). 2 are strangers, 1 is Singer’s child. According to Singer himself, with his ‘strong’ position, would he say he’s blameworthy if he were to save his child and let the 2 children die? (He can’t save all three.)

Given what we have learned so far, and granted his Utilitarianism, yes.

since expected lifetime income is a proxy measure (meaning not perfect but best option), how can outcome utilitarians remain confident that they will be able to calculate a value of expected outcome that proves Singer wrong?

We are waiting for Singer and co. to provide a better measure… Until then, this does seem like as good a way to go as any.

Re slide 35, is this the kind of calculations governments use to formulate policies? eg. tuition grant

Similar calculations are in play whenever you buy insurance. How do you think insurance companies calculate how much premium you need to pay to buy their products?
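The basic idea behind such pricing is expected value. A minimal sketch with invented numbers (real actuarial pricing is far more involved than this):

```python
# Expected-value premium for a one-year policy; all numbers invented.
payout = 100_000   # amount paid out if the insured event occurs
odds = 1_000       # event expected roughly once per 1,000 policy-years

expected_cost = payout / odds   # the actuarially "fair" premium per year
loading = 30                    # markup for expenses, risk margin, profit
premium = expected_cost + loading

print(premium)  # 130.0
```

The insurer charges each customer a little more than the probability-weighted cost of the payout–the same kind of probability-weighted reasoning that goes into the policy calculations on the slide.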

Hmm, wouldn’t the money spent also go to the tailor etc. which would bring happiness elsewhere as the money circulates throughout the economy? A cost to me specifically doesn’t necessarily translate to a cost for the world as a whole?

It does actually. Suppose the cost is very small–a little muscle ache. That affects the work you do. Which affects… your company’s profit line, by just that very little bit, etc. Or the cost is big–replacing your ruined shoes, etc., etc.

But can’t the same amount of money buy different amounts of happiness in different places?

That’s why we use International Dollars, Purchasing Power Parity, and the like.

Can we say that even though Norcross and Singer have kind intentions by writing what they write, their arguments can still be torn apart with cold, hard logic, like from Lomasky and Friedman?

I think both Norcross and Singer would take offense at the implication that they aren’t working from cold, hard logic…

Prof, does this mean Singer never buys any tertiary needs or pleasures for himself and gives most of his money to charity?

I don’t really know. But all reports are that he is a rather frugal person who does donate a big part of his income towards charity. Whether to the point of marginal utility, I don’t know.

As a deontologist, is having no intention to save others the same as having the intention to harm others?

It will depend on the context though. You can have no intention to do X in the sense that whether or not to do X isn’t even an issue. You were walking down the street and didn’t even see the guy seeking donations for his charity–and that’s one way in which you can have no intention to donate. But suppose he came up to you and you saw him, etc., and you walked away. Then this lack of intention is a lot more like an intention to not do something. And that kind of “not doing something” can, in principle, be the direct cause of a harm. Imagine that you are the third little pig, and when your brothers come frantically knocking on your brick house’s door to escape from the big bad wolf, you have no intention of opening…