Sun 16 Jul 2006 03:13 PM
I wrote most of this entry a couple of weeks ago, after Brian Weatherson pointed to the article in question. Something else came up, so I saved it and moved on. Today I went back, cleaned it up, and posted it.
In a recent paper in Analysis [July 2006, 179-187], Philip Pettit considers the question of whether or not one should acquiesce to the opinion of the majority. He considers three cases, but two are sufficient for the points below.
Case A: Joe is one of many witnesses to an auto accident.* When he saw it, he thought the driver ran a red light. Many other witnesses say that the light was green. Should Joe defer to the majority of reports and conclude that the light was indeed green?
Case B: Joe believes that intelligent design is the best explanation for the existence of order in the universe. The majority of people say otherwise. Should Joe defer to the majority and conclude that intelligent design is hokum?
Pettit suggests that our answer is 'yes' in Case A, but 'no' in Case B. His paper aims to explain and justify the asymmetry.
In a subsequent section, Pettit offers a schematic situation. Paraphrasing a bit:
Joe is one of many people who face a given question. Joe and the rest are "equally intelligent, equally informed and equally impartial." Joe disagrees with the answer given by most of the others. Joe knows all of this to be true. Now, should Joe change his opinion?
Pettit offers the obvious argument for the 'yes' answer: If each person has some independent probability (greater than 1/2) of getting the right answer, then one would be more likely to get the right answer by trusting the majority than the minority. In the limit of a large population, the probability that the majority will get the right answer approaches one.
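This is essentially the Condorcet jury theorem, and it is easy to check numerically. The sketch below is my own illustration, not anything from Pettit's paper; the function name and the sample values of p and n are hypothetical choices.

```python
from math import comb

def majority_correct(n, p):
    """Probability that a strict majority of n independent voters,
    each correct with probability p, gets the right answer.
    (n is taken to be odd, so there are no ties.)"""
    k_min = n // 2 + 1  # smallest strict majority
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# With p only slightly above 1/2, the majority is still nearly
# certain to be right once the group is large enough.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.6), 4))
```

For a lone voter the majority is just that voter, so the probability stays at p; as n grows, the probability climbs toward one, which is the force of the 'yes' argument.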
This argument gives a 'yes' answer to any instance of the schematic situation. Rather than rebut it, Pettit looks elsewhere for an asymmetry. The difference between Case A and Case B, he suggests, is that the belief that the light was green was closer to the periphery of Joe's web of belief than the belief in intelligent design. Because the latter belief is deeply embedded in his other beliefs, Joe would have to decide if and how to update his other beliefs after deferring to the majority opinion.
Pettit surveys various ways that Joe might try to update his beliefs. If Joe accepts the majority opinion about several questions all at once, then he might end up with inconsistent beliefs. If Joe accepts the majority opinion about one matter, lets that affect his degree of belief, considers the majority opinion regarding a second question, and so on, then the outcome will depend on which question Joe considers first. Since possible inconsistency and path dependence are to be avoided, Pettit concludes, we should say 'no' in cases where the beliefs are deeply embedded.
Pettit's arguments for inconsistency or path-dependence if Joe defers proceed simply in terms of Joe's beliefs about p, q, and p&q. As such, I suspect that the arguments do not really discriminate between core and peripheral beliefs. Peripheral beliefs can still enter into conjunctions. Admittedly, this suspicion is not an argument; but it does suggest that embeddedness can't explain the asymmetry between Cases A and B.
One real distinction between Case A and Case B is much simpler: Case B is not plausibly an instance of Pettit's general schema, and so the initial argument for a 'yes' answer does not apply. We know that debates about intelligent design do not involve people who are "equally intelligent, equally informed and equally impartial." Both sides would agree on this, although for different reasons; believers in science see the ID crowd as creationist yahoos, and the yahoos portray us as being in the grip of a priori naturalism. Regardless of whatever might be stipulated about Joe and his interlocutors, our background knowledge shapes our intuitions about Case B.
Moreover, path dependence can result if Joe defers in Case A. Suppose there are a dozen witnesses who are evenly divided as to whether the light was red or green. Three of them compare notes before being questioned. Merely as a matter of chance, one of these three will be in the minority. Since the perceptual belief is far from the center of her web, she defers to the other two. Things continue until Joe has a chance to compare notes with his fellow witnesses. By that time, a majority favors one view or the other. This is not quite the same path dependence that worries Pettit; it is not relative to Joe's personal history, but relative to the history of the community. Nevertheless, it is enough to discredit the strategy of deference even for beliefs that are not deeply embedded.
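The cascade can also be made concrete with a toy simulation. This is my own sketch, not Pettit's model: twelve witnesses start evenly split, random groups of three compare notes, and any witness in the minority of her group defers to the other two. The final "majority" then depends on the chance composition of the groups.

```python
import random

def cascade(seed):
    """Toy deference cascade among 12 witnesses, 6 believing 'red'
    and 6 believing 'green'. Witnesses compare notes in random
    groups of three; the minority member of each group defers to
    the other two. Returns the community's final majority view."""
    random.seed(seed)
    beliefs = ['red'] * 6 + ['green'] * 6
    order = list(range(12))
    random.shuffle(order)  # chance determines who talks to whom
    for i in range(0, 12, 3):
        group = order[i:i + 3]
        votes = [beliefs[j] for j in group]
        winner = max(set(votes), key=votes.count)  # 2-1 or 3-0
        for j in group:
            beliefs[j] = winner  # the minority member defers
    reds = beliefs.count('red')
    return 'red' if reds > 6 else ('green' if reds < 6 else 'tie')

outcomes = {cascade(s) for s in range(50)}
print(outcomes)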
This might seem to be an argument for 'no' in Case A, which would be in tension with the general argument for 'yes' that Pettit begins with. I think that this is just a result of the way the problem is represented. Before Joe discusses the accident with others, he believes that it looked to him as if the light was red. Whether he defers to them or not, he should not change his belief about that. Rather, he might change his belief about whether the light was red. Similarly, if he listens to other witnesses' descriptions of the accident, he will not defer on the basis of their beliefs that the light was green. Rather, he is interested in their reports about whether it looked to them as if the light was red. With this distinction in mind, the cascade to agreement would not occur.
Perhaps there is no such distinction in Case B. For non-perceptual beliefs, one might say, there is no clear distinction between saying how it seemed and judging how it was.** Of course, perceptual beliefs are less embedded in the web of belief. So this would just be Pettit's distinction again.
* He puts the scenarios in the second person, but I have shifted to the third person Joe. It would be presumptuous to stipulate your opinions about intelligent design in Case B.
** I have phrased this as a hypothetical because I am dubious of it. Even if detachment is permitted for some beliefs, it seems like scientific controversies require remembering which evidence one took to be persuasive. As such, Joe should distinguish between having believed a theory on the basis of some evidence and later disbelieving it because his clever friends do. I can distinguish between my sense of a scientific theory based on my (meager) understanding of the evidence and my sense based on what competent scientists tell me; I have beliefs about both, but if asked for a flat-footed judgment about the theory I would probably defer and give the latter.