I may if I might, but I can't so I won't

Sun 27 Nov 2005 01:30 PM

I've been thinking about Roger White's essay "Epistemic Permissiveness" (available on his website), and I have an argument that I want to try out.

Permissive cases, in White's jargon, are ones in which it would be possible for two agents with the same evidence and background knowledge to disagree rationally about the matter at hand; i.e., ones in which it is compatible with rationality to believe P and equally compatible with rationality to believe not-P instead. Epistemic permissiveness is the doctrine that there are some permissive cases.
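Put semi-formally (the notation is mine, not White's): permissiveness claims that for some body of total evidence E and some proposition P,

Rational(believe P | E) and Rational(believe not-P | E)

where 'Rational' marks what epistemic rationality permits rather than what it requires.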

White's arguments aim to uncover a kind of deliberative irrationality in permissive cases. Consider a schematic example: Before I collect evidence about some contingent matter P, I do not have a belief that P or that not-P. I make some observations, consult some experts, and so on. The evidence that I collect leads me to believe P. If I know that this case is a permissive case, however, then I know that I might rationally have come to believe not-P on the basis of the same evidence. Whether I believe P or not-P depends on the way in which I decide to be rational, not on the force of the evidence. If the difference between believing P and believing not-P just depends on choice or contingency in this way, I might as well have decided which to believe before collecting any evidence.

I have rendered the argument in a ham-handed way, making it look too much like the problem in my Peirce paper. However, I think that what I say below goes through even against White's more subtle formulation of the problem.

White admits that the deliberative incoherence evaporates if an agent can never judge-- except perhaps retrospectively-- that a situation she is in is a permissive case. It would then be impossible for my belief that P to be undermined by rumination on the fact that I might rationally have believed not-P instead. He dismisses this approach:

...while this position may be coherent and escape the objections thus far, I doubt that anyone holds such a view, as it is hard to see what could motivate it. (p. 10)

Many in philosophy of science have been tempted to say that rationality is a feature of epistemic communities and not of isolated individuals. I will sketch a mild version of this claim, as advanced by Philip Kitcher in the 90s and by Helen Longino in her more staid moments; I do not need the more revolutionary versions advocated by Lynn Hankinson Nelson, Kitcher after the millennium, and Longino in her wilder moments.

In discovering what the world is like, there are pressures to reason in different ways. Some discoveries would never be made if we did not leap to bold new hypotheses, but sometimes leaping leads us down blind alleys and into theoretical box canyons. The rational thing to do is to spread the epistemic risk: have some scientists pursue wild theories while others defend orthodoxy. Promising new leads will then be followed up by someone, and the community as a whole will shift only once a critical mass of evidence has been gathered in a new theory's favor. Call this the collective strategy for scientific development.
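A toy calculation, with numbers invented just for concreteness: suppose the orthodox program would succeed with probability 0.7 if pursued and a heterodox rival would succeed with probability 0.3, independently. If everyone defends orthodoxy, the community succeeds with probability 0.7. If the community splits so that both programs are pursued, the probability that at least one succeeds is 1 - (1 - 0.7)(1 - 0.3) = 0.79. Spreading the epistemic risk beats monoculture.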

A further fact about human agents is that we are better at exploring new theories if we believe that they might be true, and better at defending orthodoxy if we believe that the challenging view is false. This means that the collective strategy requires some people (the pioneers) to believe P and others (the old guard) to believe not-P, even when confronted by the same evidence and arguments. The collective strategy yields permissive cases.

Permissive cases occur only around legitimate scientific controversies, so they are not ubiquitous. Moreover, whether the pioneers were heading for a new frontier or down a dead end-- and so whether the case was genuinely permissive-- will be clear only in retrospect. Deliberation at the time cannot be undermined by the consideration that this might be a permissive case. This seems to be just the kind of view that White considers coherent but unoccupied. And someone does hold such a view-- namely me, at least some of the time.