Alarm clock belief change

Mon 30 Nov 2009 08:31 AM

Suppose I wake up one morning and find that I believe something (call it Q) that I had not believed before. Of course, this might happen if I discover some new evidence for Q when I wake up; for example, Q might be 'There is a dog in the street' and I am woken up by its barking. It may also happen that I become aware of a possibility or of a consequence of some other belief that I have; inspiration steals over me in the morning. These are all cases in which I might be said to have a new reason for believing Q, a reason I did not have before. Yet sometimes I just find that a belief is more appealing to me when I wake up, with no new reason in its favor.

In more formal terms, we can ask when it is rationally permissible to adjust one's degree of belief in Q. Bayesians have lots to say about revising one's credence in Q in light of new evidence. They have less to say about, but nonetheless acknowledge, revising one's credence in Q when one discovers a new theoretical possibility that one had not imagined before. (For example: There is a theory T, T has significant consequences for Q, and one had not previously assigned any degree of belief to T.) Yet the case I am considering does not involve new evidence or new possibilities. Bayesians would condemn me as irrational, I think.

For subjective Bayesians, rational credence depends on evidence and the space of possibilities along with one's prior probabilities. And one's priors are beyond rational scrutiny. This doesn't seem to make a difference for my case, however, because waking up in the morning is not my first moment as a rational agent. I had credences when I went to bed. In order for conditionalization to have any teeth as a diachronic constraint, Bayesians must say that I ought not shuffle around my credences when I wake up.
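To make the constraint explicit (a minimal sketch, assuming the standard conditionalization rule, with Cr_old for my credences at bedtime and Cr_new for my credences on waking):

    Cr_new(Q) = Cr_old(Q | E), where E is my total new evidence.

If I learn nothing evidentially relevant overnight, then Cr_old(Q | E) = Cr_old(Q), and so conditionalization requires Cr_new(Q) = Cr_old(Q). Any shift in my credence in Q without new evidence violates the rule.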

Let's generalize a bit. Consider an arbitrary permissive account of rationality; that is, an account which says that two agents in relevantly similar situations might have different beliefs while both still being rational. Suppose that it is rationally permissible in my situation to believe either Q or not-Q. (This might instead be expressed in terms of degrees of belief by supposing that it is rationally permissible to assign different degrees of belief to Q.)

The mere fact that I might rationally believe Q and might rationally believe not-Q is sometimes taken as a sign that I ought to suspend judgement. Roger White gives an argument to this effect. In an old blog post, I answer the argument in this way: The community's ability to generate true beliefs will (in some cases) be furthered by having some members believe Q while others believe not-Q. So rationality should not require us all to suspend judgement, lest the community (and so all of us) end up worse off.

If my reply to White's argument works, then it is OK for me to believe Q when my equally rational counterpart believes not-Q. Nothing in my old post shows that it's OK for me to switch sides, however. So I might still be irrational to change my mind about Q as I wake up in the morning.

In order to reap the advantage of the disagreement allowed by permissive rationality, the community must be organized so that some of us will believe Q and some of us will believe not-Q. Yet the population constraints are probably not so precise that one person more or less on either side will sway the outcome. So my changing my mind as I wake up is not obviously irrational.

I am unsure what to say, but I'll sleep on it.