Workshop In Honour of John Williams: To Celebrate His Retirement From SMU

Date: 30 May 2018, Wednesday
Time: 10am to 4pm
Venue: Philosophy Resource Room (AS3-05-23)

10am to 11.30am : John Williams, SMU, “Once You Think You’re Wrong, You Must Be Right: New Versions of the Preface Paradox”

Some take the view that the so-called ‘preface paradox’ shows that rationality may allow you to have inconsistent beliefs, in contradiction of orthodox views of justification. Here I argue for the conclusion that rationality may require you, as a real human thinker, to have inconsistent beliefs, even when you recognize the inconsistency. Perhaps the most vigorous opposition to my conclusion comes from classical and insightful objections by Doris Olin. After preliminary clarification, I first discuss three versions of the paradox. These are Makinson’s Original Version, my World Capitals and Olin’s Fallibility. For each version, I consider objections to the effect that my conclusion is not established. None of the three versions is entirely free from objection. I show that there is an important mistake in Makinson’s logic that seems to have long gone unnoticed, with the result that his original case is one in which your beliefs are not inconsistent. The case may be modified to evade this difficulty, but it is then doubtful that it is realistic. World Capitals avoids this difficulty but is vulnerable to Olin’s objection—one that she makes against her own version, Fallibility—that accepting the possibility of justified inconsistent beliefs saddles you with a pair of justified beliefs that are in explicit contradiction. However, I present Modesty, a version that supposes that you believe that at least one of your beliefs (excluding this) is false. I argue that this version escapes all the objections that could trouble the other versions, as well as some interesting general objections that Olin makes. I conclude that this is a living and everyday case in which rationality requires you to have inconsistent beliefs even while you recognize that your beliefs are inconsistent. I also argue more tentatively for the same verdict for Modesty*, a version that supposes that you believe that at least one of your beliefs (including this) is false.

About the Speaker:
John N. Williams (PhD Hull) works primarily in epistemology. He also works in philosophy of language and applied ethics. He has published in Mind, Analysis, the Journal of Philosophy, the Australasian Journal of Philosophy, Synthese, Philosophical Studies, Acta Analytica, Philosophia, Philosophy East and West, American Philosophical Quarterly, Philosophy, Philosophy Compass, the Journal of Philosophical Research, Religious Studies, Theoria, Social Epistemology Review and Reply Collective and Logos and Episteme. He is co-editor, together with Mitchell Green, of Moore’s Paradox: New Essays on Belief, Rationality and the First Person (Oxford University Press). He researches and teaches at the School of Social Sciences, Singapore Management University; before that he taught in the Philosophy Department at the National University of Singapore and was Head of the Unit of Philosophy at the University of the West Indies. In August he will take up a Professorship in Philosophy at Nazarbayev University in Kazakhstan.

11.30am to 12.30pm : Robert Beddor, NUS, “Modal Conditions on Knowledge and Skilled Performance”

In this talk, I examine two prominent analyses of knowledge in the current literature. One is a modal analysis, which identifies knowledge with safe belief. The other is a virtue epistemological analysis, which identifies knowledge with a type of apt performance. These two approaches are usually viewed as rivals; this talk offers a path to reconciliation. I outline a new form of virtue epistemology, which combines an analysis of knowledge as skillful performance with a modal analysis of skillfulness. I argue that the resulting view – “Modal Virtue Epistemology” – preserves the main benefits of both analyses.

2pm to 3pm : Tang Weng Hong, NUS, “Moore’s Paradox and Degrees of Belief”

It is absurd to assert or to believe the following:
(1) It’s raining, and I do not believe that it’s raining.
(2) It’s raining, and I believe that it’s not raining.
But is merely assigning a degree of belief greater than 0.5 to (1) or to (2) absurd? I maintain, along with Adler and Armour-Garb (2007), that (a) assigning a degree of belief greater than 0.5 to (1) need not be absurd. But I also maintain that (b) assigning a degree of belief greater than 0.5 to (2) is indeed absurd. What explains this discrepancy? Adler and Armour-Garb think that (a) can be explained by their view that full beliefs are transparent whereas partial beliefs are not. But such a view does not explain (b). In my talk, I consider John’s account of why it is absurd to believe (1) and to believe (2). (See, in particular, ‘Moore’s Paradoxes, Evans’s Principle and Self Knowledge’.) I also consider how John’s account may be supplemented to help us account for the aforementioned discrepancy.

3pm to 4pm : Ben Blumson, NUS, “Knowability and Believability”

Moore’s paradox in belief and Fitch’s paradox of knowability are very closely related – whereas the first concerns whether truths of the form “p and I don’t believe p” are believable (without absurdity), the second concerns whether truths of the form “p and it’s not known that p” are knowable. In this paper, I consider how and whether responses to Moore’s paradox constrain the correct response to Fitch’s paradox, and vice versa. Finally, I discuss implications for metaphysical anti-realism.

All are welcome

On A Theory of a Better Morality by Geoffrey Sayre-McCord

Abstract:
Normally, there is a sharp distinction between a better theory of X and a theory of a better X. That the theory of a better X is a theory according to which things are different from the way one’s (so far) best theory says they are is (normally) no reason whatsoever to think one’s (so far) best theory is wrong, just reason to wish X were different (and, if it is possible, reason to work to change X). That it would be better if everyone were treated as equals is no reason whatsoever to think that they are; that it would be better that death came quickly, painlessly, and late in life is no reason whatsoever to think it does; that it would be better if we could fly is no reason whatsoever to think that we can…
In contrast (I maintain) when the subject matter is normative, this normally sharp distinction is elided and the difference between one’s theory of the best X (the best morality, the best standards of inference, the best rules of justification…) and one’s (so far) best theory of X necessarily provides a reason (though perhaps not a decisive reason) to think one’s (so far) best theory is wrong.
The elision plays an essential role in a range of arguments concerning morality, practical rationality, and theoretical rationality, a few of which I discuss. Yet it smacks of depending crucially and unacceptably on wishful thinking – on supposing that the fact that things would be better if only they were a certain way provides some reason to think they are that way. As a result, it invites invocation of a restricted defense of “Wouldn’t it be nice that p, therefore p” reasoning. I think that the invitation should be resisted. The elision is to be defended, I argue, as a reflection of a constraint on acceptable normative theories that is itself explained by a distinctive characteristic of normative concepts that sets them all apart from descriptive concepts.

Date: 28 May 2018, Monday
Time: 2pm to 4pm
Venue: Philosophy Resource Room (AS3-05-23)

About the Speaker:
Geoffrey Sayre-McCord works at the University of North Carolina at Chapel Hill, where he is the Morehead-Cain Alumni Distinguished Professor of Philosophy and the Director of the University’s Philosophy, Politics, and Economics Program. Professor Sayre-McCord works primarily in metaethics, moral theory, and the history of moral philosophy.

All are welcome

Most Counterfactuals Are Still False by Alan Hájek

Abstract:
I have long argued for a kind of ‘counterfactual skepticism’: most counterfactuals are false. I maintain that the indeterminism and indeterminacy associated with most counterfactuals entail their falsehood. For example, I claim that these counterfactuals are both false:
(Indeterminism) If the chancy coin were tossed, it would land heads.
(Indeterminacy) If I had a son, he would have an even number of hairs on his head at his birth.
And I argue that most counterfactuals are relevantly similar to one or both of these, as far as their truth-values go. I also have arguments from the incompatibility of ‘would’ and ‘might not’ counterfactuals, and from Heim (‘reverse Sobel’) sequences.
However, counterfactual reasoning seems to play an important role in science, and ordinary speakers judge many counterfactuals that they utter to be true. A number of philosophers have defended our judgments against counterfactual skepticism. David Lewis and others appeal to ‘quasi-miracles’; Robbie Williams to ‘typicality’; John Hawthorne and H. Orri Stefánsson to ‘counterfacts’, primitive counterfactual facts; Moritz Schulz to an arbitrary-selection semantics; Jonathan Bennett and Hannes Leitgeb to high conditional probabilities; Karen Lewis to contextually-sensitive ‘relevance’.
I argue against each of these proposals. A recurring theme is that they fail to respect certain valid inference patterns. I conclude: most counterfactuals are still false.

Date: 25 May 2018, Friday
Time: 2pm to 4pm
Venue: Philosophy Resource Room (AS3-05-23)

About the Speaker:
Alan Hájek studied statistics and mathematics at the University of Melbourne (B.Sc. (Hons), 1982), where he won the Dwight Prize in Statistics. He took an M.A. in philosophy at the University of Western Ontario (1986) and a Ph.D. in philosophy at Princeton University (1993), winning the Porter Ogden Jacobus fellowship. He has taught at the University of Melbourne (1990) and at Caltech (1992-2004), where he received the Associated Students of California Institute of Technology Teaching Award (2004). He has also spent time as a visiting professor at MIT (1995), Auckland University (2000), and Singapore Management University (2005). Hájek joined the Philosophy Program at RSSS, ANU, as Professor of Philosophy in February 2005. He is a Fellow of the Australian Academy of the Humanities. He was the President of the Australasian Association of Philosophy, 2009-10.
Hájek’s research interests include the philosophical foundations of probability and decision theory, epistemology, the philosophy of science, metaphysics, and the philosophy of religion. His paper “What Conditional Probability Could Not Be” won the 2004 American Philosophical Association Article Prize for “the best article published in the previous two years” by a “younger scholar”. The Philosopher’s Annual selected his “Waging War on Pascal’s Wager” as one of the ten best articles in philosophy in 2003.

All are welcome