In W03 Right and Wrong, I mentioned that to resolve the issue about knowledge posed by the Tandey case, some Consequentialists prefer to talk in terms of the expected outcome of an action, rather than the actual outcome. In this post, I will expand on this idea. But before that, let’s make sure we are on the same page.

The basic version of Consequentialism introduced in W03 says that the moral status of an action is all about the value of its outcome for the world; we have only one duty: to act so as to bring about the best overall outcome for the world. A couple of points to note. First, this outcome (or “consequence”) includes everything that the action brings about, which, incidentally, includes the action itself. Second, Consequentialists are interested in the value of the outcome for the world, that is, in whether this outcome is good or bad for everyone (or everything within the relevant domain). Since it’s “everyone”, the agent himself is, of course, included; the point is just that the good outcome can’t be good only for the agent or those he cares about, but must in some sense be good “for everyone”.

Finally, the one duty that this rather basic version of Consequentialism holds is that we are to act so as to bring about the best overall outcome for the world. (For now, we will leave it to the experts to debate whether this should be taken as “greatest total amount of good”, “greatest average amount of good”, or something else; you get the general idea.) Let’s say that some agent A did action X. For A to have done the morally right thing is for it to be the case that the outcome of A’s doing X is the best among alternatives. If there are several alternatives that are all equally “best”, we can instead say that the outcome of A’s doing X is at least as good as the outcome of any other alternative. But let’s keep things simple. So far so good.
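If symbols help, here is one compact way to state that criterion (the notation is mine, not anything official in the literature): write O(X) for the outcome of action X, and V for the value of an outcome for the world. Then:

$$A\text{'s doing } X \text{ is morally right} \iff V(O(X)) \ge V(O(Y)) \text{ for every alternative } Y \text{ open to } A$$

Reading the inequality as strict gives the “best among alternatives” version; keeping it non-strict gives the “at least as good as any alternative” version.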

But as some of you have already noticed, there’s something somewhat dissatisfying about the above when those outcomes are taken as actual outcomes, i.e., everything actually brought about by the action. If this is the case, then there is a sense in which we typically cannot know for sure, before those actual outcomes have occurred, whether the action we want to undertake is morally right. Wouldn’t this suggest that there’s something wrong with Consequentialism as a moral theory that promises to tell us which actions are morally right, and which are morally wrong?

At one level, the above isn’t an objection to Consequentialism. Everything depends on what the theory promises to do. This is because there are two ways to understand the theory’s promise to tell us which actions are the morally right or wrong ones. The promise could mean that the theory will deliver a definition of the actions that are right/wrong. Or it could mean that it will deliver a way by which we can know (i.e., in advance) whether a proposed action is right/wrong. If the promise of the Consequentialists is only to give us a definition of right action, and not a way by which we can know if an action is right or wrong, then they have already fully delivered. They have given us a definition in light of which actions can be evaluated as morally right or wrong, even if this evaluation has to be done retrospectively.

But some would find the above answer unsatisfying, since we often want our moral theories to help us make morally right decisions going forward. We want our theories to be action guiding, and not just action evaluating. Assuming that this is a legitimate requirement on a moral theory (not all philosophers agree that it is), actual outcome Consequentialism doesn’t deliver. Expected outcome Consequentialism is a way to keep as much of Consequentialism as possible while meeting that requirement.

One confusion that students often have at this point is to think that “expected outcome” = “intended outcome”. That’s wrong. The intended outcome of an action is what the agent wants to bring about by undertaking the action, her aim, so to speak, on account of which she undertook the action in the first place! The expected outcome of an action, on the other hand, is a prediction about what will happen. The two can coincide, of course. After all, if you wanted the arrow to hit the target, you wouldn’t have shot it unless you also predicted that shooting it would get your arrow to the target! But this doesn’t mean they are conceptually the same: one is about what you want, the other is about what you think will happen in the future. Nor does it mean they can’t fail to coincide, especially if we are not talking about the agent’s own expectations.

While different things might be meant by the idea of expected outcome, what’s relevant for our purposes is something like this:

The expected outcome of an action = everything that an idealized observer of the action with the best information available at that time and place can reasonably predict will be brought about by the action.
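For those who know a bit of decision theory, one common way to cash out what such an observer can “reasonably predict” (this gloss is my addition, not part of the definition above) is as a probability-weighted average: list the possible outcomes supported by the best available information, weight the value of each by its probability, and sum:

$$\mathrm{EV}(X) = \sum_i P(o_i \mid X)\, V(o_i)$$

where the o_i are the possible outcomes of doing X, P(o_i | X) is the probability our idealized observer would assign to o_i, and V is, as before, the value of an outcome for the world.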

Why “idealized observer”? Because the agent may be culpably ignorant of important information that is otherwise available, willfully ignore it, refuse to take relatively costless steps to acquire it, fail to make reasonable predictions, and so on. It seems wrong to let him off the hook for making predictions on that basis.

Let’s use the Tandey example from the lecture to illustrate the difference. It’s 28 September 1918. Tandey was serving with the British forces, which had successfully captured Marcoing from the Germans. As the German forces were in retreat, Tandey spied a wounded German soldier entering his line of fire. As he told the story, he “took aim but couldn’t shoot a wounded man.” As a result, the German soldier retreated to safety. That soldier was alleged to be Adolf Hitler. Now that last part was false: the soldier was almost certainly not Hitler (you can read about the historical stuff here, and here). But let’s pretend that he really was Hitler. Let’s also grant, as we did in the lecture, that had Hitler died in 1918, the subsequent history of the world would have turned out differently, and it would have been a much better world for everyone than the one in real history.

To simplify further, I will assume Utilitarianism as the go-to version of Consequentialism. Given an actual outcome version of Utilitarianism, what Tandey did was not morally right, since the actual outcome of what he did that fateful day in 1918 was not the best among alternatives. The (would-be) world in which the German soldier (i.e., Hitler) is shot in 1918 contains much more net happiness than the world in which he wasn’t shot in 1918. But there was no way for Tandey to know beforehand that what he did was morally wrong.

So let’s turn to the expected outcome version of Utilitarianism. Given this version, what Tandey did would be morally right if (and only if) its expected outcome has the most happiness among all alternatives. Imagine our idealized observer of Tandey’s action, an observer with the best information available at that time and place. What can such an observer reasonably predict would result from Tandey’s killing the German soldier, or not doing so? (Here, to keep the discussion manageable, we are simplifying by assuming that these are the only two possibilities.)

Here’s a plausible scenario. The lone soldier isn’t an important member of the Kaiser’s forces. The German forces had already been defeated at Marcoing and were in retreat. It is extremely unlikely that that lone soldier’s death will materially affect the outcome of the current campaign or war (i.e., World War I). It would thus seem to the observer that, reasonably speaking, the death or sparing of that lone German soldier will not impact the big trends of the world all that much. But if he is shot, there is a high chance that some family out there will grieve over his passing, which means some net unhappiness.

It’s, of course, possible that he will later become a serial killer or mass murderer, thus causing a big decrease in net happiness, but the odds of that sort of thing are typically extremely low. So all things considered, a case can be made that the expected outcome of sparing the soldier is a little better than the expected outcome of killing him. Reasonably speaking, the expected outcome of sparing the soldier contains a bit more net happiness than the expected outcome of killing him; this course of action has the best expected outcome, even if only by a little compared to the alternative. By the way, I’m not saying that Tandey’s action will check out as ok just from the switch from actual to expected outcome Utilitarianism. Whether it will check out that way will depend on the best information available at that time and place, about the person that Tandey spared, and so on.
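To make the structure of that comparison concrete, here is a small sketch in Python. Every probability and happiness number below is invented purely for illustration (the post commits to none of them); the point is just the shape of the calculation: weight each outcome the idealized observer could reasonably predict by its probability, sum, and compare across actions.

```python
# A toy expected-outcome comparison for the Tandey case.
# All probabilities and happiness values are invented for illustration;
# they are not claims about the actual history.

# Each action maps to (probability, net happiness) pairs: what the
# idealized observer of 1918 could reasonably predict. Note that
# "the soldier turns out to be Hitler" is NOT on the list, since no
# observer in 1918 could reasonably predict that.
predicted_outcomes = {
    "spare": [
        (0.999, 0.0),     # war unaffected; soldier survives, no extra grief
        (0.001, -500.0),  # tiny chance he later does something terrible
    ],
    "shoot": [
        (0.999, -1.0),    # war unaffected; his family grieves
        (0.001, -1.0),    # even a would-be murderer's family grieves
    ],
}

def expected_value(outcomes):
    """Probability-weighted sum of net happiness: EV = sum of p * v."""
    return sum(p * v for p, v in outcomes)

for action, outcomes in predicted_outcomes.items():
    print(f"{action}: expected net happiness = {expected_value(outcomes):+.3f}")
```

With these made-up numbers, sparing comes out at -0.5 and shooting at -1.0, so sparing has the better expected outcome, if only by a little, which mirrors the informal reasoning above. Feed in different (but still reasonable) numbers and the verdict can flip; that is precisely why everything hinges on the best information available at that time and place.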