Consider again the standard Trolley Problem, where you can save five people from certain death by switching the rails, diverting the trolley so that it kills one person instead. Supposedly, most people think that switching the rails in this scenario is at least permissible, if not obligatory. Utilitarians have a straightforward way to explain the judgment–the five lives are worth more than the one, five times as much, in fact (assuming that each of the lives involved is worth about the same).
And if the two courses of action available to the agent are such that the first would bring about more good in the world than the second, the right thing to do is the first. All this, of course, presupposes that the moral worth of the five lives and that of the one life can be compared in a way that yields a quantitative difference.
Now imagine that each life has an infinite worth–to be precise, a countably infinite worth (see video). If that’s the case, no such comparison will produce a quantitative difference, because five times (countable) infinity is just infinity again. For an explanation of the math, see the video above.
If a human life is infinitely valuable, it would be permissible to sacrifice millions to save one–since 1,000,000 times (countable) infinity still equals infinity. This would mean that there is nothing to choose, in terms of the quantity of good in the world, between the death of 1,000,000 and the death of 1–you end up with the same net outcome either way. When that happens, it is permissible for the Utilitarian to go with either outcome without doing anything wrong. And that’s not a result the typical Utilitarian wants for his theory. Therefore, it’s a really bad idea for a Utilitarian to grant that each human life has infinite worth–unless, of course, he has some really fancy math that can tame these results.
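For those who want the arithmetic behind these claims spelled out: writing $\aleph_0$ for the countably infinite cardinal (a notational choice on my part–the video may use different symbols), the absorbing behavior looks like this:

```latex
% Cardinal arithmetic with \aleph_0, the countably infinite cardinal.
% For any finite n \geq 1, multiplication is absorbed:
n \cdot \aleph_0 = \aleph_0

% So the five lives and the one life have the same total worth:
5 \cdot \aleph_0 = \aleph_0 = 1 \cdot \aleph_0

% And likewise for a million lives vs. one:
1{,}000{,}000 \cdot \aleph_0 = \aleph_0
```

In other words, once each life is assigned worth $\aleph_0$, multiplying by any finite headcount leaves the total unchanged, so the outcomes cannot be ranked by quantity of good.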
(For those who can’t sleep at night: What happens if we somehow use the uncountable infinities (mentioned towards the end of the video)? No, I’m not answering this one. You are on your own.)
(A paper by Nick Bostrom–the author of the piece we are reading for W11–on the issue here. Just keep in mind that it’s way beyond what we are going–and what I’m willing–to talk about for this module.)