The questions pertaining to Norcross and Lomasky will be in the next post. First, some questions concerning arguments:
Must the premises come before the conclusion for the whole thing to be an argument?
When we express arguments in words, there are ways to do so that put the conclusion statement in front, in the middle, or at the end–all of this is possible as long as you know how to use the usual words that mark premises and conclusions, e.g., “because…”, “since” (marking what follows as a premise), “therefore”, “thus” (marking what follows as a conclusion). Nonetheless, by convention, when we formulate an argument explicitly in premise–conclusion form, e.g.,
- Premise 1: …
- Premise 2: …
- ——————–
- Conclusion: …
then you should put the premises on top and the conclusion below. Some extra fussy people will also insist upon a line between the premises and the conclusion. Other extra fussy people may also demand that you mark out exactly how the statements relate to each other–which can be useful in more complex, multi-move arguments (the above is basically just one simple argument–a “single move” from premises to the conclusion). We might see some of that later in the semester.
Can’t the premise (if p then q) be true despite both variables (p and q) being false?
Yes it can be. Suppose someone says that “If Nazi Germany conquered the Soviet Union, then the UK would have surrendered”–both the antecedent and the consequent are false. But plausibly, the entire statement could well be true.
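On the standard (material) reading of the conditional, “if p then q” is false only when p is true and q is false; every other combination makes it true. Here is a quick truth-table sketch–Python is used purely for illustration, and the function name is ours. (One caveat: the Nazi Germany example is a counterfactual, whose truth isn’t settled merely by a truth table; the table just shows that false components don’t force a conditional to be false.)

```python
# Material conditional: "if p then q" is true except when p is true and q is false.
def implies(p, q):
    return (not p) or q

# Enumerate all four truth-value combinations.
for p in (True, False):
    for q in (True, False):
        print(f"p={p}, q={q}: if p then q = {implies(p, q)}")

# The combination from the example above: antecedent and consequent both
# false, yet the conditional itself comes out true.
assert implies(False, False)
```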
Is the inverse then correct? Where if B is right, then A is right?
I’m pretty sure the above applies to the “one man’s modus ponens is another man’s modus tollens” thing at Slide #24. The first thing to recall is the two basic argument forms, modus ponens and modus tollens:
Modus Ponens:
- If P then Q
- P
- Therefore, Q
Modus Tollens:
- If P then Q
- Not Q
- Therefore, Not P
Alright. Now let’s say that you have two opponents, Peter and Jane. Peter believes that Q is true while Jane believes that Q is not true–they disagree. Peter advances an argument for his position: If P then Q, since P, therefore, Q (modus ponens). Jane, upon considering the argument, realizes that she isn’t in a position to reject “If P then Q”–that premise she actually agrees with. So how? Does this mean that she has to change her mind about Q not being true? Not at all. She is quite intent on maintaining that Q is not true. But this just means that she has to reject P–it’s as if she advanced the modus tollens version of the argument. If P then Q, since not Q, therefore, not P.
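The point that both forms are valid–and that Jane’s move is forced–can be checked mechanically: an argument form is valid when every truth-value assignment that makes all the premises true also makes the conclusion true. A small sketch (Python purely for illustration; the helper names are ours):

```python
def implies(p, q):
    # Material conditional: "if p then q"
    return (not p) or q

def valid(premises, conclusion):
    # Valid iff no assignment makes all premises true and the conclusion false.
    return all(conclusion(p, q)
               for p in (True, False)
               for q in (True, False)
               if all(prem(p, q) for prem in premises))

# Modus ponens: If P then Q; P; therefore Q.
assert valid([implies, lambda p, q: p], lambda p, q: q)

# Modus tollens: If P then Q; not Q; therefore not P.
assert valid([implies, lambda p, q: not q], lambda p, q: not p)

# Jane's predicament: she accepts "If P then Q" and denies Q, so the only
# consistent option left to her is to deny P -- the modus tollens direction.
```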
Are we going to cover deontological arguments against/for factory farming?
Which would be more morally right: eating animals from a farm or children from a school, assuming both have the same nutritional facts, both suffer no matter what, and you must eat meat or you will die?
Hi prof. What are your comments on the halal method of slaughtering? Being a Muslim myself, the basic idea of halal slaughter is to ensure that the animal does not suffer while it’s dying.
No. Though think about it–if you do agree that it’s wrong to eat at Fred’s, could it be that some Deontological intuitions are behind the scenes? I’m pretty sure it’s not morally permissible to eat human children from a school, etc. In ancient days when cities were under siege, there were stories of how the starving adults ended up eating their own children, or exchanging children to eat so that they didn’t have to eat their own–but the fact that people are willing to do outrageous things when they are desperate enough doesn’t mean that they actually think that what they did was morally permissible! I’m not as familiar with the halal procedures for slaughter and I refrain from saying stuff I don’t know. However, note that the matter at hand isn’t really about slaughter per se, but the ‘living conditions’ of the animals in the factory farms.
How do consequentialism and hedonic utilitarianism go together?
Please review W03 Right and Wrong, around Slide #18–this is a basic definitional thing that was covered in the Webinar and you had better know it by now. As we have introduced the terms, Consequentialism is the umbrella theory that includes specific variants such as Utilitarianism, etc.
Ok, on to the Utilitarian Argument against the consumption of factory-farmed meat.
- Premise 1: If one consumes factory-farmed meat, one supports the creation of more misery than happiness.
- Premise 2: If one supports the creation of more misery than happiness, one is doing something morally wrong.
- Conclusion: If one consumes factory-farmed meat, one is doing something morally wrong (i.e., consuming factory-farmed meat is morally wrong).
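For what it’s worth, the argument’s form–a chain of conditionals, often called hypothetical syllogism–is valid, so any dispute has to be over the premises, as the questions below illustrate. A brute-force check over all truth assignments (Python purely for illustration):

```python
from itertools import product

def implies(p, q):
    # Material conditional: "if p then q"
    return (not p) or q

# Form of the argument:
#   Premise 1:  If A then B   (A: one consumes factory-farmed meat,
#                              B: one supports more misery than happiness)
#   Premise 2:  If B then C   (C: one is doing something morally wrong)
#   Conclusion: If A then C
# Valid: no assignment makes both premises true and the conclusion false.
for a, b, c in product((True, False), repeat=3):
    if implies(a, b) and implies(b, c):
        assert implies(a, c)
print("hypothetical syllogism: valid")
```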
The questions:
Am I correct to say that premise 1 is wrong, because even though we consume factory-farmed meat, we do not necessarily support the creation of more misery than happiness, since some of us don’t condone animal cruelty?
No, the “support” here doesn’t mean “approve”, but “make the other thing possible or at least more probable”–as, for example, the legs of a table support the tabletop, i.e., make it possible for the tabletop not to fall down. Hmm, for reasons that elude me this is actually the first time this potential misunderstanding has come up. I should ask the tutors to remind everyone.
For premise 1, what if we argue that the misery/happiness of animals does not count on the consequentialist scale?
You will be arguing against Premise 2 instead, not Premise 1.
Is this argument valid against premise 1? “if one limits the consumption of factory farmed meat, misery is also created as meat affordability goes up and people can’t afford meat.”
Strictly speaking, you’ve not made an argument. Whenever you think that you are making an argument, always ask yourself–ok, what’s my conclusion? What are my premises? How am I intending the connection to work? However, I take it that you are asserting something like this: “Restricting access to factory-farmed meat creates increased misery”. I don’t know if this is true or false; however, it is not clearly incompatible with Premise 1–both Premise 1 and what is asserted can be true at the same time.
It is morally wrong to do things that decrease the world’s overall balance of happiness. But if you have to choose between 2 actions that both decrease this balance, the less bad one is still morally right, right?
Remember that the argument assumes the Utilitarian Perspective. And from that perspective, assuming that these are the only 2 options available, the answer is “yes”. A “less bad” balance of happiness between two scenarios is a better balance of happiness between the two scenarios.
From a hedonistic utilitarian standpoint, if we discover an animal that doesn’t feel pain, or discover a method that maximises happiness for animals, e.g., a virtual simulation machine, then are we morally allowed to factory farm that animal?
If something doesn’t feel pain, then it isn’t a moral patient for the Hedonic Utilitarian, so… Likewise for the other proposal.
From a hedonic utilitarian perspective, if humans feel much more emotional pain about hurting puppies than hurting chickens, even if this pain is somewhat irrational, does this pain nonetheless factor into the balance calculations?
Absolutely has to be taken into account. Pleasure is pleasure, and likewise, pain is pain–the plain Hedonic Utilitarian doesn’t get to choose. This sort of came up in the well-being topic as well.
If we could find a criterion that is applicable only to humans (regardless of our rational capability or our emotional reactions to things), and not applicable to non-humans, would that criterion be sufficient to object to Premise 2?
This is basically the strategy behind both the Rationality Objection and the Ethics of Care Objection.
Can we say that because humans are moral agents, their condition is worth more compared to the animals? Even if animal farms bring pain to the animals, they bring a lot of benefits to humans, who are moral agents.
Does the point about rationality yet again muddle the moral agent/patient divide, by saying that despite carrying the extra moral burden, we have the extra privilege of our art/labour being valued more than any animal’s actions?
Of course you can say that–all of us can say whatever we want. The question is whether this is a defensible position. One immediate problem is that technically, as long as the animals are moral patients, it’s not clear how our being moral agents–with all our rationality, art, philosophy, science, etc.–can just steamroll everything away. It’s not that our rationality doesn’t matter, nor does what Norcross says have to imply that somehow, we aren’t more valuable than animals. All he really needs is the more modest claim that–whatever else–the animals are moral patients, which means that how we treat them morally matters.
Could it possibly be not rationality/intelligence but rather the human brain? After all, scientists have said that animal brains are SIMILAR to, but never really the same as, human brains. This could be a way to distinguish humans and animals.
I don’t think these are rival proposals as much as complementary ones. Presumably, we can agree that the brain is the physical basis of our rationality and intelligence. But conversely, saying that we have rationality and intelligence isn’t just to say that we have a physical chunk called the brain–we have certain capabilities.
Rabbits are animals which can be both pets and factory-farmed animals. Yet people would object to eating pet rabbits but not factory-farmed rabbits. Why is there a difference, despite it being the same type of animal?
Then what if people treat puppies as equal to farm animals? They just have a different culture and a different diet–would that then be considered treating the puppies badly, even if they do not torture them?
Mrs Loy and I lived for a few years in Toronto, where rabbits were actually quite easily available from the supermarket–just like chickens and everything else. Nicely skinned and cleaned up, of course. She simply could not bring herself to even go near the section of the deli where the rabbits were… The point here is that your “people” is too indefinite. In principle, there would be people in all four possibilities–object to eating both pet rabbits and factory-farmed rabbits, object to neither, or object to one but not the other. So that’s point one. And secondly, who says that people are always consistent? Similarly for the treatment of puppies in some societies. Norcross talks about these differences, as I recall.
Just wondering… do animals really not have anything else that makes their lives worth living other than the experience of pleasure/pain? Especially compared to humans?
Would it be fair for Norcross to use an objective list to determine that the animals are indeed suffering?
If you are a Hedonic Utilitarian (or Ethical Hedonist), only pleasure and pain matter… so. If you are not a Hedonic Utilitarian (or Ethical Hedonist), then other possibilities are on the table. Pretty sure that given Norcross’ Hedonic Utilitarianism, the animals’ well-being is basically a matter of pleasure/pain.
Why doesn’t Norcross denounce eating meat in general (rather than just factory-farmed meat) if he is a Consequentialist?
He is a vegetarian, I think. It’s just that he is limiting his target in the paper–sometimes, you do a better job not trying to do everything at once. Also, who says that one has to denounce eating meat just because one is a Consequentialist? Where did that come from?
Why is this [puppy] argument not Utilitarian?
Neither of the premises implies Utilitarianism. You don’t need to be a Utilitarian to agree with either premise…
Aren’t both cases dissimilar? Fred actively seeks the suffering of the puppies for the sake of his own fulfilment. Conversely, the suffering of farmed animals is a non-compulsory by-product that can be mitigated.
This is to invoke the Doctrine of Double Effect, and Norcross thinks it doesn’t work. See longer explanation here.
Let’s say that slaughterhouse workers are doing the job that no one wants to do (i.e., killing animals). Since Fred is torturing the puppies by himself, can we say he is on moral high ground, since he is not hiring people to torture the pups?
I don’t know. Is the person who tortures you morally better or worse than the person who hired him to torture you?
Hi Prof, could you clarify the example given in the Norcross reading–the one against the marginal cases argument, where he talks about the five innocent and five guilty persons?
Hmm… we don’t really intend to do much with that section. But sure, if I can find the time I’ll write something about it.
Isn’t it true that the puppies are treated as badly as factory-farmed animals? Cos sometimes the animals get cut to pieces while alive and boiled alive. So I don’t really agree with Santa’s–I mean Lomasky’s–argument.
During the slavery abolition era of 1800s America, there were slaves who fought for slavery to remain, on the basis that their safety and livelihood would remain. Today, it is clear that such a viewpoint is absurd. Likewise, do farm animals really seem to benefit?
Lomasky will say that both of you are begging the question–assuming the very thing to be proven. You are, of course, free to disagree with him.
What of the argument that animals don’t suffer in animal farms–aren’t their needs met, more so than in the wild? Would this be similar to the naturalistic fallacy, in the sense that if things were bad in the wild, it is morally permissible to do no better?
Why does it matter that the animals feel pain in the wild? Shouldn’t the question be whether we are causing the pain or not?
Aren’t animals in the wild technically better off? Factory-farmed animals live in confined spaces their whole lives. Geese only feed in crowded spaces temporarily. Bovines in the wild run from predators, but humans are the predators for factory-farmed ones.
This isn’t the point of his argument, though. Rather, he is asking us to rethink our confidence that we understand the animals’ psychology–we could just be imagining them using our own psychology as a template, i.e., anthropomorphizing. We might be tempted to think that the animals are suffering when we see their conditions in the farms. But then we see their conditions in the wild–they are just doing whatever they do normally, rather than suffering. And if anything, seeing their condition in the wild might even suggest to us that they are having a better time in the farms. Unless you attribute to the animals a desire for “freedom”–and take that to be part of their wellbeing. But again, we are very likely anthropomorphizing. I don’t think we know what it is like to be those animals “from the inside”.
Quite confused with the A, B, C just now–mind clarifying it again?
This is regarding Slide #35. The basic idea is this. Let’s say that we are comparing the happiness balance between a world in which there is factory-farming and a world in which there isn’t. You might be tempted to think that the difference between the two is just whether the happiness of the farm animals is counted in–that’s why the box for the humans’ happiness is the same in the (A) and (B) worlds, and the only difference between them is the box for the animals. But what Lomasky wants to say is that this isn’t quite right–the comparison is really between a (B) and a (C) world: with factory-farming, the humans are much happier than they would be without it (higher standard of living, many people get to enjoy eating meat, etc.), which leads them to create more factory farms, which leads to the existence of even more creatures that have, on average, lives worth living.
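The shape of the comparison can be put into toy arithmetic–all figures below are invented for illustration and come from neither Lomasky nor the slides. The tempting comparison holds the humans’ box fixed and only toggles the animals’ box; Lomasky’s comparison lets both boxes change.

```python
# All numbers are invented, purely to show the shape of the comparison.

# With factory farming, humans are much better off (cheaper meat,
# higher standard of living), and MORE animals come into existence,
# each with a life that is, on average, worth living.
humans_without_ff = 100   # humans' happiness in a world without factory farming
humans_with_ff = 130      # humans' happiness in a world with factory farming
animals_without_ff = 0    # the farm animals simply never exist
animals_with_ff = 8       # many animals, lives barely-but-positively worth living

balance_without_ff = humans_without_ff + animals_without_ff
balance_with_ff = humans_with_ff + animals_with_ff

# On these made-up numbers, the world with factory farming comes out
# ahead on the overall balance -- the point the (B)-vs-(C) framing drives at.
# Holding the humans' box fixed (the tempting comparison) would miss this.
assert balance_with_ff > balance_without_ff
```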
…then how would we know that our standard of pain/pleasure is similar to that of animals? Aren’t we superimposing our own ideals on to them?
Keep in mind that similar issues already arose earlier when we talked about Ethical Hedonism. It’s not irresolvable, given better neuroscience.
With technology improving, we are better able to quantify the pleasure and pain of animals. Wouldn’t that disable one of Lomasky’s premises if we can prove that factory farmed animals generally experience more pain?
Good–except that the outcome could go either way, right? We could also discover that the animals aren’t experiencing as much pain as we initially thought. By the way, I really wish we had such technology so we don’t have to speculate.
If existence in general is miserable, I think non-existence creates greater happiness than existence in a factory farm
If existence in general is miserable, I think we are in much deeper trouble than whatever trouble we are facing with factory farms.
Lomasky, in the end, saying “you’re the utilitarian, not me”–isn’t that a cheap cop-out?
What about Bentham’s statement that Norcross brings up in the argument? Wouldn’t he use that against Lomasky as an argument against the moral superiority of human beings?
Not at all. In fact, Lomasky is making exactly the strong move here. When you are debating with an opponent, the powerful thing to do is to start from your opponent’s position to show that it leads to your position! Now the other fella really has no escape–if you are right about the way the arguments work, he can disagree with you only by giving up on his own position.
This is the reason why Lomasky, even though he isn’t a Hedonic Utilitarian, assumes something close to that position for the sake of the argument, to show that it’s ok to eat factory-farmed meat. But when the hypothetical Norcross points out that given Hedonic Utilitarianism, the problem with the Aliens arises, Lomasky can point out that–waitaminute, I’m not the one who is the Hedonic Utilitarian, you are. For my argument regarding the permissibility of eating factory-farmed meat to work, I only need to assume that pleasure and pain define the wellbeing of the animals. I’m not obliged to say that this applies to human beings as well. So if there’s a problem generated here, it’s a problem for you–since you are the Hedonic Utilitarian–not me.
Lomasky will remind us all that Bentham–like Norcross–is a Hedonic Utilitarian. But why is that an argument against his position? We already know that they disagree.
*****
From the W05 Q/A
Is there an example of a moral agent that is not a moral patient?
It’s going to depend on what you think constitutes the basis for the two statuses. Norcross, for instance, is of the opinion that they have entirely different bases, which means that it’s possible for a being to be one without being the other–at least that’s conceivable. Suppose the capacity for moral reasoning is the basis for moral agency, while the capacity to suffer (i.e., feel pain) is the basis for moral patiency; then, if you have a being that’s able to engage in moral reasoning but unable to suffer, you have something that’s a moral agent but not a moral patient. Perhaps an Artificial Intelligence in the near future will be like this. Norcross’ position is, of course, not the only possible one–he was directly arguing against people who believe that moral patiency is more closely tied to moral agency, such that if you are an agent, you are also a patient.